Test Report: KVM_Linux_crio 19678

8ef5536409705b0cbf1ed8a719bbf7f792426b16:2024-09-20:36299

Failed tests (19/221)

TestAddons/parallel/Registry (75.19s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 4.247656ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-vxc6t" [10b4cecb-c85b-45ef-8043-e88a81971d51] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.02051547s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-bqdmf" [11ab987d-a80f-412a-8a15-03a5898a2e9e] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004235551s
addons_test.go:338: (dbg) Run:  kubectl --context addons-446299 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-446299 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-446299 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.080240833s)

-- stdout --
	pod "registry-test" deleted

                                                
                                                
-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-446299 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-446299 ip
2024/09/20 18:24:10 [DEBUG] GET http://192.168.39.237:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-446299 addons disable registry --alsologtostderr -v=1
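The failure above means the in-cluster registry Service never answered the busybox wget probe before kubectl gave up after roughly one minute ("timed out waiting for the condition"). For reference, a minimal sketch of re-running the same check by hand against this profile (assuming the addons-446299 cluster and the registry addon are still up; registry-debug is just an illustrative pod name, not part of the test suite):

	# hypothetical one-off debug pod; mirrors the probe the test performs
	kubectl --context addons-446299 -n kube-system get svc registry
	kubectl --context addons-446299 run registry-debug --rm -it --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -- \
	  sh -c "nslookup registry.kube-system.svc.cluster.local && wget --spider -S http://registry.kube-system.svc.cluster.local"

If the name resolves but the wget still hangs, the registry and registry-proxy pods in kube-system and the Service's endpoints are the next things to check.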
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-446299 -n addons-446299
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-446299 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-446299 logs -n 25: (1.431002418s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-675466 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | -p download-only-675466                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
	| delete  | -p download-only-675466                                                                     | download-only-675466 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
	| start   | -o=json --download-only                                                                     | download-only-363869 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | -p download-only-363869                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
	| delete  | -p download-only-363869                                                                     | download-only-363869 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
	| delete  | -p download-only-675466                                                                     | download-only-675466 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
	| delete  | -p download-only-363869                                                                     | download-only-363869 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-747965 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | binary-mirror-747965                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39359                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-747965                                                                     | binary-mirror-747965 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
	| addons  | enable dashboard -p                                                                         | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | addons-446299                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | addons-446299                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-446299 --wait=true                                                                | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:14 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:22 UTC | 20 Sep 24 18:22 UTC |
	|         | -p addons-446299                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
	|         | -p addons-446299                                                                            |                      |         |         |                     |                     |
	| addons  | addons-446299 addons disable                                                                | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-446299 addons disable                                                                | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
	|         | addons-446299                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-446299 ssh cat                                                                       | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
	|         | /opt/local-path-provisioner/pvc-11168afa-d97c-4581-90a8-f19b354e2c35_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-446299 addons disable                                                                | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
	|         | addons-446299                                                                               |                      |         |         |                     |                     |
	| ip      | addons-446299 ip                                                                            | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:24 UTC | 20 Sep 24 18:24 UTC |
	| addons  | addons-446299 addons disable                                                                | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:24 UTC | 20 Sep 24 18:24 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:12:45
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:12:45.452837  749135 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:12:45.452957  749135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:12:45.452966  749135 out.go:358] Setting ErrFile to fd 2...
	I0920 18:12:45.452970  749135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:12:45.453156  749135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
	I0920 18:12:45.453777  749135 out.go:352] Setting JSON to false
	I0920 18:12:45.454793  749135 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6915,"bootTime":1726849050,"procs":270,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:12:45.454907  749135 start.go:139] virtualization: kvm guest
	I0920 18:12:45.457071  749135 out.go:177] * [addons-446299] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:12:45.458344  749135 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 18:12:45.458335  749135 notify.go:220] Checking for updates...
	I0920 18:12:45.459761  749135 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:12:45.461106  749135 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:12:45.462449  749135 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:12:45.463737  749135 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:12:45.465084  749135 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:12:45.466379  749135 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:12:45.497434  749135 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 18:12:45.498519  749135 start.go:297] selected driver: kvm2
	I0920 18:12:45.498542  749135 start.go:901] validating driver "kvm2" against <nil>
	I0920 18:12:45.498561  749135 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:12:45.499322  749135 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:12:45.499411  749135 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19678-739831/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:12:45.513921  749135 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:12:45.513966  749135 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:12:45.514272  749135 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:12:45.514314  749135 cni.go:84] Creating CNI manager for ""
	I0920 18:12:45.514372  749135 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:12:45.514386  749135 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 18:12:45.514458  749135 start.go:340] cluster config:
	{Name:addons-446299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-446299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:12:45.514600  749135 iso.go:125] acquiring lock: {Name:mk7c8e0c52ea50ffb7ac28fb9347d4f667085c62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:12:45.516315  749135 out.go:177] * Starting "addons-446299" primary control-plane node in "addons-446299" cluster
	I0920 18:12:45.517423  749135 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:12:45.517447  749135 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:12:45.517459  749135 cache.go:56] Caching tarball of preloaded images
	I0920 18:12:45.517543  749135 preload.go:172] Found /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:12:45.517552  749135 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:12:45.517857  749135 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/config.json ...
	I0920 18:12:45.517880  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/config.json: {Name:mkaa7e3a2b8a2d95cecdc721e4fd7f5310773e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:12:45.518032  749135 start.go:360] acquireMachinesLock for addons-446299: {Name:mke27b943eaf3105a3a7818ba8cbb5bd07aa92e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:12:45.518095  749135 start.go:364] duration metric: took 46.763µs to acquireMachinesLock for "addons-446299"
	I0920 18:12:45.518131  749135 start.go:93] Provisioning new machine with config: &{Name:addons-446299 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-446299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:12:45.518195  749135 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 18:12:45.520537  749135 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0920 18:12:45.520688  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:12:45.520727  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:12:45.535639  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40567
	I0920 18:12:45.536170  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:12:45.536786  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:12:45.536808  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:12:45.537162  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:12:45.537383  749135 main.go:141] libmachine: (addons-446299) Calling .GetMachineName
	I0920 18:12:45.537540  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:12:45.537694  749135 start.go:159] libmachine.API.Create for "addons-446299" (driver="kvm2")
	I0920 18:12:45.537726  749135 client.go:168] LocalClient.Create starting
	I0920 18:12:45.537791  749135 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem
	I0920 18:12:45.635672  749135 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem
	I0920 18:12:45.854167  749135 main.go:141] libmachine: Running pre-create checks...
	I0920 18:12:45.854195  749135 main.go:141] libmachine: (addons-446299) Calling .PreCreateCheck
	I0920 18:12:45.854768  749135 main.go:141] libmachine: (addons-446299) Calling .GetConfigRaw
	I0920 18:12:45.855238  749135 main.go:141] libmachine: Creating machine...
	I0920 18:12:45.855256  749135 main.go:141] libmachine: (addons-446299) Calling .Create
	I0920 18:12:45.855444  749135 main.go:141] libmachine: (addons-446299) Creating KVM machine...
	I0920 18:12:45.856800  749135 main.go:141] libmachine: (addons-446299) DBG | found existing default KVM network
	I0920 18:12:45.857584  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:45.857437  749157 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015bb0}
	I0920 18:12:45.857661  749135 main.go:141] libmachine: (addons-446299) DBG | created network xml: 
	I0920 18:12:45.857685  749135 main.go:141] libmachine: (addons-446299) DBG | <network>
	I0920 18:12:45.857700  749135 main.go:141] libmachine: (addons-446299) DBG |   <name>mk-addons-446299</name>
	I0920 18:12:45.857710  749135 main.go:141] libmachine: (addons-446299) DBG |   <dns enable='no'/>
	I0920 18:12:45.857722  749135 main.go:141] libmachine: (addons-446299) DBG |   
	I0920 18:12:45.857736  749135 main.go:141] libmachine: (addons-446299) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 18:12:45.857749  749135 main.go:141] libmachine: (addons-446299) DBG |     <dhcp>
	I0920 18:12:45.857762  749135 main.go:141] libmachine: (addons-446299) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 18:12:45.857774  749135 main.go:141] libmachine: (addons-446299) DBG |     </dhcp>
	I0920 18:12:45.857784  749135 main.go:141] libmachine: (addons-446299) DBG |   </ip>
	I0920 18:12:45.857795  749135 main.go:141] libmachine: (addons-446299) DBG |   
	I0920 18:12:45.857805  749135 main.go:141] libmachine: (addons-446299) DBG | </network>
	I0920 18:12:45.857817  749135 main.go:141] libmachine: (addons-446299) DBG | 
	I0920 18:12:45.862810  749135 main.go:141] libmachine: (addons-446299) DBG | trying to create private KVM network mk-addons-446299 192.168.39.0/24...
	I0920 18:12:45.928127  749135 main.go:141] libmachine: (addons-446299) DBG | private KVM network mk-addons-446299 192.168.39.0/24 created
	I0920 18:12:45.928216  749135 main.go:141] libmachine: (addons-446299) Setting up store path in /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299 ...
	I0920 18:12:45.928243  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:45.928106  749157 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:12:45.928255  749135 main.go:141] libmachine: (addons-446299) Building disk image from file:///home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 18:12:45.928282  749135 main.go:141] libmachine: (addons-446299) Downloading /home/jenkins/minikube-integration/19678-739831/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 18:12:46.198371  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:46.198204  749157 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa...
	I0920 18:12:46.306630  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:46.306482  749157 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/addons-446299.rawdisk...
	I0920 18:12:46.306662  749135 main.go:141] libmachine: (addons-446299) DBG | Writing magic tar header
	I0920 18:12:46.306673  749135 main.go:141] libmachine: (addons-446299) DBG | Writing SSH key tar header
	I0920 18:12:46.306681  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:46.306605  749157 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299 ...
	I0920 18:12:46.306695  749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299
	I0920 18:12:46.306758  749135 main.go:141] libmachine: (addons-446299) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299 (perms=drwx------)
	I0920 18:12:46.306798  749135 main.go:141] libmachine: (addons-446299) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube/machines (perms=drwxr-xr-x)
	I0920 18:12:46.306816  749135 main.go:141] libmachine: (addons-446299) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube (perms=drwxr-xr-x)
	I0920 18:12:46.306825  749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube/machines
	I0920 18:12:46.306839  749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:12:46.306872  749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831
	I0920 18:12:46.306884  749135 main.go:141] libmachine: (addons-446299) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831 (perms=drwxrwxr-x)
	I0920 18:12:46.306904  749135 main.go:141] libmachine: (addons-446299) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 18:12:46.306929  749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 18:12:46.306939  749135 main.go:141] libmachine: (addons-446299) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 18:12:46.306952  749135 main.go:141] libmachine: (addons-446299) Creating domain...
	I0920 18:12:46.306963  749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home/jenkins
	I0920 18:12:46.306969  749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home
	I0920 18:12:46.306976  749135 main.go:141] libmachine: (addons-446299) DBG | Skipping /home - not owner
	I0920 18:12:46.308063  749135 main.go:141] libmachine: (addons-446299) define libvirt domain using xml: 
	I0920 18:12:46.308090  749135 main.go:141] libmachine: (addons-446299) <domain type='kvm'>
	I0920 18:12:46.308100  749135 main.go:141] libmachine: (addons-446299)   <name>addons-446299</name>
	I0920 18:12:46.308107  749135 main.go:141] libmachine: (addons-446299)   <memory unit='MiB'>4000</memory>
	I0920 18:12:46.308114  749135 main.go:141] libmachine: (addons-446299)   <vcpu>2</vcpu>
	I0920 18:12:46.308128  749135 main.go:141] libmachine: (addons-446299)   <features>
	I0920 18:12:46.308136  749135 main.go:141] libmachine: (addons-446299)     <acpi/>
	I0920 18:12:46.308144  749135 main.go:141] libmachine: (addons-446299)     <apic/>
	I0920 18:12:46.308150  749135 main.go:141] libmachine: (addons-446299)     <pae/>
	I0920 18:12:46.308156  749135 main.go:141] libmachine: (addons-446299)     
	I0920 18:12:46.308161  749135 main.go:141] libmachine: (addons-446299)   </features>
	I0920 18:12:46.308167  749135 main.go:141] libmachine: (addons-446299)   <cpu mode='host-passthrough'>
	I0920 18:12:46.308172  749135 main.go:141] libmachine: (addons-446299)   
	I0920 18:12:46.308184  749135 main.go:141] libmachine: (addons-446299)   </cpu>
	I0920 18:12:46.308194  749135 main.go:141] libmachine: (addons-446299)   <os>
	I0920 18:12:46.308203  749135 main.go:141] libmachine: (addons-446299)     <type>hvm</type>
	I0920 18:12:46.308221  749135 main.go:141] libmachine: (addons-446299)     <boot dev='cdrom'/>
	I0920 18:12:46.308234  749135 main.go:141] libmachine: (addons-446299)     <boot dev='hd'/>
	I0920 18:12:46.308243  749135 main.go:141] libmachine: (addons-446299)     <bootmenu enable='no'/>
	I0920 18:12:46.308250  749135 main.go:141] libmachine: (addons-446299)   </os>
	I0920 18:12:46.308255  749135 main.go:141] libmachine: (addons-446299)   <devices>
	I0920 18:12:46.308262  749135 main.go:141] libmachine: (addons-446299)     <disk type='file' device='cdrom'>
	I0920 18:12:46.308277  749135 main.go:141] libmachine: (addons-446299)       <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/boot2docker.iso'/>
	I0920 18:12:46.308290  749135 main.go:141] libmachine: (addons-446299)       <target dev='hdc' bus='scsi'/>
	I0920 18:12:46.308302  749135 main.go:141] libmachine: (addons-446299)       <readonly/>
	I0920 18:12:46.308312  749135 main.go:141] libmachine: (addons-446299)     </disk>
	I0920 18:12:46.308324  749135 main.go:141] libmachine: (addons-446299)     <disk type='file' device='disk'>
	I0920 18:12:46.308335  749135 main.go:141] libmachine: (addons-446299)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 18:12:46.308350  749135 main.go:141] libmachine: (addons-446299)       <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/addons-446299.rawdisk'/>
	I0920 18:12:46.308364  749135 main.go:141] libmachine: (addons-446299)       <target dev='hda' bus='virtio'/>
	I0920 18:12:46.308376  749135 main.go:141] libmachine: (addons-446299)     </disk>
	I0920 18:12:46.308386  749135 main.go:141] libmachine: (addons-446299)     <interface type='network'>
	I0920 18:12:46.308395  749135 main.go:141] libmachine: (addons-446299)       <source network='mk-addons-446299'/>
	I0920 18:12:46.308404  749135 main.go:141] libmachine: (addons-446299)       <model type='virtio'/>
	I0920 18:12:46.308414  749135 main.go:141] libmachine: (addons-446299)     </interface>
	I0920 18:12:46.308424  749135 main.go:141] libmachine: (addons-446299)     <interface type='network'>
	I0920 18:12:46.308440  749135 main.go:141] libmachine: (addons-446299)       <source network='default'/>
	I0920 18:12:46.308454  749135 main.go:141] libmachine: (addons-446299)       <model type='virtio'/>
	I0920 18:12:46.308462  749135 main.go:141] libmachine: (addons-446299)     </interface>
	I0920 18:12:46.308467  749135 main.go:141] libmachine: (addons-446299)     <serial type='pty'>
	I0920 18:12:46.308472  749135 main.go:141] libmachine: (addons-446299)       <target port='0'/>
	I0920 18:12:46.308478  749135 main.go:141] libmachine: (addons-446299)     </serial>
	I0920 18:12:46.308486  749135 main.go:141] libmachine: (addons-446299)     <console type='pty'>
	I0920 18:12:46.308493  749135 main.go:141] libmachine: (addons-446299)       <target type='serial' port='0'/>
	I0920 18:12:46.308498  749135 main.go:141] libmachine: (addons-446299)     </console>
	I0920 18:12:46.308504  749135 main.go:141] libmachine: (addons-446299)     <rng model='virtio'>
	I0920 18:12:46.308512  749135 main.go:141] libmachine: (addons-446299)       <backend model='random'>/dev/random</backend>
	I0920 18:12:46.308518  749135 main.go:141] libmachine: (addons-446299)     </rng>
	I0920 18:12:46.308522  749135 main.go:141] libmachine: (addons-446299)     
	I0920 18:12:46.308528  749135 main.go:141] libmachine: (addons-446299)     
	I0920 18:12:46.308544  749135 main.go:141] libmachine: (addons-446299)   </devices>
	I0920 18:12:46.308556  749135 main.go:141] libmachine: (addons-446299) </domain>
	I0920 18:12:46.308574  749135 main.go:141] libmachine: (addons-446299) 
	I0920 18:12:46.314191  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:13:6e:16 in network default
	I0920 18:12:46.314696  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:46.314712  749135 main.go:141] libmachine: (addons-446299) Ensuring networks are active...
	I0920 18:12:46.315254  749135 main.go:141] libmachine: (addons-446299) Ensuring network default is active
	I0920 18:12:46.315494  749135 main.go:141] libmachine: (addons-446299) Ensuring network mk-addons-446299 is active
	I0920 18:12:46.315890  749135 main.go:141] libmachine: (addons-446299) Getting domain xml...
	I0920 18:12:46.316428  749135 main.go:141] libmachine: (addons-446299) Creating domain...
	I0920 18:12:47.702575  749135 main.go:141] libmachine: (addons-446299) Waiting to get IP...
	I0920 18:12:47.703586  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:47.704120  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:47.704148  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:47.704086  749157 retry.go:31] will retry after 271.659022ms: waiting for machine to come up
	I0920 18:12:47.977759  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:47.978244  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:47.978271  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:47.978199  749157 retry.go:31] will retry after 286.269777ms: waiting for machine to come up
	I0920 18:12:48.265706  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:48.266154  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:48.266176  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:48.266104  749157 retry.go:31] will retry after 302.528012ms: waiting for machine to come up
	I0920 18:12:48.570875  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:48.571362  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:48.571386  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:48.571312  749157 retry.go:31] will retry after 579.846713ms: waiting for machine to come up
	I0920 18:12:49.153045  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:49.153478  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:49.153506  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:49.153418  749157 retry.go:31] will retry after 501.770816ms: waiting for machine to come up
	I0920 18:12:49.657032  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:49.657383  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:49.657410  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:49.657355  749157 retry.go:31] will retry after 903.967154ms: waiting for machine to come up
	I0920 18:12:50.562781  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:50.563350  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:50.563375  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:50.563286  749157 retry.go:31] will retry after 1.03177474s: waiting for machine to come up
	I0920 18:12:51.596424  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:51.596850  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:51.596971  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:51.596890  749157 retry.go:31] will retry after 1.278733336s: waiting for machine to come up
	I0920 18:12:52.877368  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:52.877732  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:52.877761  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:52.877690  749157 retry.go:31] will retry after 1.241144447s: waiting for machine to come up
	I0920 18:12:54.121228  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:54.121598  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:54.121623  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:54.121564  749157 retry.go:31] will retry after 2.253509148s: waiting for machine to come up
	I0920 18:12:56.377139  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:56.377598  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:56.377630  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:56.377537  749157 retry.go:31] will retry after 2.563830681s: waiting for machine to come up
	I0920 18:12:58.944264  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:58.944679  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:58.944723  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:58.944624  749157 retry.go:31] will retry after 2.392098661s: waiting for machine to come up
	I0920 18:13:01.339634  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:01.340032  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:13:01.340088  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:13:01.339990  749157 retry.go:31] will retry after 2.800869076s: waiting for machine to come up
	I0920 18:13:04.142006  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:04.142476  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:13:04.142500  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:13:04.142411  749157 retry.go:31] will retry after 4.101773144s: waiting for machine to come up
	I0920 18:13:08.247401  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.247831  749135 main.go:141] libmachine: (addons-446299) Found IP for machine: 192.168.39.237
	I0920 18:13:08.247867  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has current primary IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.247875  749135 main.go:141] libmachine: (addons-446299) Reserving static IP address...
	I0920 18:13:08.248197  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find host DHCP lease matching {name: "addons-446299", mac: "52:54:00:33:9c:3e", ip: "192.168.39.237"} in network mk-addons-446299
	I0920 18:13:08.320366  749135 main.go:141] libmachine: (addons-446299) DBG | Getting to WaitForSSH function...
	I0920 18:13:08.320400  749135 main.go:141] libmachine: (addons-446299) Reserved static IP address: 192.168.39.237
	I0920 18:13:08.320413  749135 main.go:141] libmachine: (addons-446299) Waiting for SSH to be available...
	I0920 18:13:08.323450  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.323840  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:minikube Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:08.323876  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.324043  749135 main.go:141] libmachine: (addons-446299) DBG | Using SSH client type: external
	I0920 18:13:08.324075  749135 main.go:141] libmachine: (addons-446299) DBG | Using SSH private key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa (-rw-------)
	I0920 18:13:08.324116  749135 main.go:141] libmachine: (addons-446299) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.237 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:13:08.324134  749135 main.go:141] libmachine: (addons-446299) DBG | About to run SSH command:
	I0920 18:13:08.324145  749135 main.go:141] libmachine: (addons-446299) DBG | exit 0
	I0920 18:13:08.447247  749135 main.go:141] libmachine: (addons-446299) DBG | SSH cmd err, output: <nil>: 
	I0920 18:13:08.447526  749135 main.go:141] libmachine: (addons-446299) KVM machine creation complete!
	I0920 18:13:08.447847  749135 main.go:141] libmachine: (addons-446299) Calling .GetConfigRaw
	I0920 18:13:08.448509  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:08.448699  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:08.448836  749135 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 18:13:08.448855  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:08.450187  749135 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 18:13:08.450200  749135 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 18:13:08.450206  749135 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 18:13:08.450212  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:08.452411  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.452723  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:08.452751  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.452850  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:08.453019  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.453174  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.453318  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:08.453492  749135 main.go:141] libmachine: Using SSH client type: native
	I0920 18:13:08.453697  749135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0920 18:13:08.453711  749135 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 18:13:08.550007  749135 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:13:08.550034  749135 main.go:141] libmachine: Detecting the provisioner...
	I0920 18:13:08.550043  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:08.552709  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.553024  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:08.553055  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.553193  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:08.553387  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.553523  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.553628  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:08.553820  749135 main.go:141] libmachine: Using SSH client type: native
	I0920 18:13:08.554035  749135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0920 18:13:08.554048  749135 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 18:13:08.651415  749135 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 18:13:08.651508  749135 main.go:141] libmachine: found compatible host: buildroot
	I0920 18:13:08.651519  749135 main.go:141] libmachine: Provisioning with buildroot...
	I0920 18:13:08.651527  749135 main.go:141] libmachine: (addons-446299) Calling .GetMachineName
	I0920 18:13:08.651799  749135 buildroot.go:166] provisioning hostname "addons-446299"
	I0920 18:13:08.651833  749135 main.go:141] libmachine: (addons-446299) Calling .GetMachineName
	I0920 18:13:08.652051  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:08.654630  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.654993  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:08.655016  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.655142  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:08.655325  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.655472  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.655580  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:08.655728  749135 main.go:141] libmachine: Using SSH client type: native
	I0920 18:13:08.655930  749135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0920 18:13:08.655944  749135 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-446299 && echo "addons-446299" | sudo tee /etc/hostname
	I0920 18:13:08.764545  749135 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-446299
	
	I0920 18:13:08.764579  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:08.767492  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.767918  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:08.767944  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.768198  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:08.768402  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.768591  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.768737  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:08.768929  749135 main.go:141] libmachine: Using SSH client type: native
	I0920 18:13:08.769151  749135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0920 18:13:08.769174  749135 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-446299' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-446299/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-446299' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:13:08.875844  749135 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:13:08.875886  749135 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19678-739831/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-739831/.minikube}
	I0920 18:13:08.875933  749135 buildroot.go:174] setting up certificates
	I0920 18:13:08.875949  749135 provision.go:84] configureAuth start
	I0920 18:13:08.875963  749135 main.go:141] libmachine: (addons-446299) Calling .GetMachineName
	I0920 18:13:08.876262  749135 main.go:141] libmachine: (addons-446299) Calling .GetIP
	I0920 18:13:08.878744  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.879098  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:08.879119  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.879270  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:08.881403  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.881836  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:08.881865  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.881970  749135 provision.go:143] copyHostCerts
	I0920 18:13:08.882095  749135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem (1078 bytes)
	I0920 18:13:08.882283  749135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem (1123 bytes)
	I0920 18:13:08.882377  749135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem (1679 bytes)
	I0920 18:13:08.882472  749135 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem org=jenkins.addons-446299 san=[127.0.0.1 192.168.39.237 addons-446299 localhost minikube]
	I0920 18:13:09.208189  749135 provision.go:177] copyRemoteCerts
	I0920 18:13:09.208279  749135 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:13:09.208315  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:09.211040  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.211327  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.211351  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.211544  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:09.211780  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.211947  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:09.212123  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:09.297180  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:13:09.320798  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 18:13:09.344012  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:13:09.366859  749135 provision.go:87] duration metric: took 490.878212ms to configureAuth
	I0920 18:13:09.366893  749135 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:13:09.367101  749135 config.go:182] Loaded profile config "addons-446299": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:13:09.367184  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:09.369576  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.369868  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.369896  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.370087  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:09.370268  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.370416  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.370568  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:09.370692  749135 main.go:141] libmachine: Using SSH client type: native
	I0920 18:13:09.370898  749135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0920 18:13:09.370918  749135 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:13:09.580901  749135 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
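Editor's note: the option above is delivered through an environment file rather than by editing crio.conf directly. A minimal sketch for confirming it landed on the guest (assuming SSH access via minikube ssh and the same paths as in the log; the EnvironmentFile wiring is an assumption about the crio unit, not shown here):

	# inside the guest VM
	cat /etc/sysconfig/crio.minikube          # expect the CRIO_MINIKUBE_OPTIONS line shown above
	systemctl cat crio | grep -i environment  # assumption: the crio unit loads that file as an EnvironmentFile
	systemctl is-active crio                  # should report "active" after the restart above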
	I0920 18:13:09.580930  749135 main.go:141] libmachine: Checking connection to Docker...
	I0920 18:13:09.580938  749135 main.go:141] libmachine: (addons-446299) Calling .GetURL
	I0920 18:13:09.582415  749135 main.go:141] libmachine: (addons-446299) DBG | Using libvirt version 6000000
	I0920 18:13:09.584573  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.584892  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.584919  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.585053  749135 main.go:141] libmachine: Docker is up and running!
	I0920 18:13:09.585065  749135 main.go:141] libmachine: Reticulating splines...
	I0920 18:13:09.585073  749135 client.go:171] duration metric: took 24.047336599s to LocalClient.Create
	I0920 18:13:09.585100  749135 start.go:167] duration metric: took 24.047408021s to libmachine.API.Create "addons-446299"
	I0920 18:13:09.585116  749135 start.go:293] postStartSetup for "addons-446299" (driver="kvm2")
	I0920 18:13:09.585129  749135 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:13:09.585147  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:09.585408  749135 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:13:09.585435  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:09.587350  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.587666  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.587695  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.587795  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:09.587993  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.588132  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:09.588235  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:09.664940  749135 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:13:09.669300  749135 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:13:09.669326  749135 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/addons for local assets ...
	I0920 18:13:09.669399  749135 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/files for local assets ...
	I0920 18:13:09.669426  749135 start.go:296] duration metric: took 84.302482ms for postStartSetup
	I0920 18:13:09.669464  749135 main.go:141] libmachine: (addons-446299) Calling .GetConfigRaw
	I0920 18:13:09.670097  749135 main.go:141] libmachine: (addons-446299) Calling .GetIP
	I0920 18:13:09.672635  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.673027  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.673059  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.673292  749135 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/config.json ...
	I0920 18:13:09.673507  749135 start.go:128] duration metric: took 24.155298051s to createHost
	I0920 18:13:09.673535  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:09.675782  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.676085  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.676118  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.676239  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:09.676425  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.676577  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.676704  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:09.676850  749135 main.go:141] libmachine: Using SSH client type: native
	I0920 18:13:09.677016  749135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0920 18:13:09.677026  749135 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:13:09.775435  749135 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726855989.751621835
	
	I0920 18:13:09.775464  749135 fix.go:216] guest clock: 1726855989.751621835
	I0920 18:13:09.775474  749135 fix.go:229] Guest: 2024-09-20 18:13:09.751621835 +0000 UTC Remote: 2024-09-20 18:13:09.673520947 +0000 UTC m=+24.255782208 (delta=78.100888ms)
	I0920 18:13:09.775526  749135 fix.go:200] guest clock delta is within tolerance: 78.100888ms
	I0920 18:13:09.775540  749135 start.go:83] releasing machines lock for "addons-446299", held for 24.257428579s
	I0920 18:13:09.775567  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:09.775862  749135 main.go:141] libmachine: (addons-446299) Calling .GetIP
	I0920 18:13:09.778659  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.779012  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.779037  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.779220  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:09.779691  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:09.779841  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:09.779938  749135 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:13:09.779984  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:09.780090  749135 ssh_runner.go:195] Run: cat /version.json
	I0920 18:13:09.780115  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:09.782348  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.782682  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.782703  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.782721  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.782827  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:09.783033  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.783120  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.783141  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.783235  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:09.783325  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:09.783381  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:09.783467  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.783589  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:09.783728  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:09.855541  749135 ssh_runner.go:195] Run: systemctl --version
	I0920 18:13:09.885114  749135 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:13:10.038473  749135 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:13:10.044604  749135 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:13:10.044673  749135 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:13:10.061773  749135 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:13:10.061802  749135 start.go:495] detecting cgroup driver to use...
	I0920 18:13:10.061871  749135 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:13:10.078163  749135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:13:10.092123  749135 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:13:10.092186  749135 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:13:10.105354  749135 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:13:10.118581  749135 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:13:10.228500  749135 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:13:10.385243  749135 docker.go:233] disabling docker service ...
	I0920 18:13:10.385317  749135 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:13:10.399346  749135 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:13:10.411799  749135 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:13:10.532538  749135 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:13:10.657590  749135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:13:10.672417  749135 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:13:10.690910  749135 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:13:10.690989  749135 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:13:10.701918  749135 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:13:10.702004  749135 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:13:10.712909  749135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:13:10.723847  749135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:13:10.734707  749135 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:13:10.745859  749135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:13:10.756720  749135 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:13:10.781698  749135 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
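Editor's note: taken together, the sed edits above amount to a small CRI-O drop-in. A sketch for checking the result by hand (same file path as in the log; the surrounding TOML varies by guest image):

	# inside the guest VM
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the commands above:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0" inside default_sysctls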
	I0920 18:13:10.792301  749135 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:13:10.801512  749135 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:13:10.801614  749135 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:13:10.815061  749135 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:13:10.824568  749135 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:13:10.942263  749135 ssh_runner.go:195] Run: sudo systemctl restart crio
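Editor's note: the sysctl probe at 18:13:10.792 fails only because br_netfilter is not loaded yet; once the modprobe succeeds the key exists. A minimal sketch for re-checking these kernel settings on the guest (not part of the test run):

	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables   # should now resolve instead of "cannot stat"
	cat /proc/sys/net/ipv4/ip_forward           # expect 1 after the echo above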
	I0920 18:13:11.344964  749135 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:13:11.345085  749135 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:13:11.350594  749135 start.go:563] Will wait 60s for crictl version
	I0920 18:13:11.350677  749135 ssh_runner.go:195] Run: which crictl
	I0920 18:13:11.354600  749135 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:13:11.392003  749135 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:13:11.392112  749135 ssh_runner.go:195] Run: crio --version
	I0920 18:13:11.424468  749135 ssh_runner.go:195] Run: crio --version
	I0920 18:13:11.468344  749135 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:13:11.469889  749135 main.go:141] libmachine: (addons-446299) Calling .GetIP
	I0920 18:13:11.472633  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:11.472955  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:11.472986  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:11.473236  749135 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:13:11.477639  749135 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:13:11.490126  749135 kubeadm.go:883] updating cluster {Name:addons-446299 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-446299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:13:11.490246  749135 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:13:11.490303  749135 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:13:11.522179  749135 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 18:13:11.522257  749135 ssh_runner.go:195] Run: which lz4
	I0920 18:13:11.526368  749135 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:13:11.530534  749135 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:13:11.530569  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 18:13:12.754100  749135 crio.go:462] duration metric: took 1.227762585s to copy over tarball
	I0920 18:13:12.754195  749135 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:13:14.814758  749135 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.060523421s)
	I0920 18:13:14.814798  749135 crio.go:469] duration metric: took 2.06066428s to extract the tarball
	I0920 18:13:14.814808  749135 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 18:13:14.850931  749135 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:13:14.892855  749135 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:13:14.892884  749135 cache_images.go:84] Images are preloaded, skipping loading
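Editor's note: after the preload tarball is unpacked into /var, the runtime should already report the control-plane images. A hedged one-liner to confirm this from inside the guest (image names assumed from the Kubernetes v1.31.1 release):

	sudo crictl images | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy)'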
	I0920 18:13:14.892894  749135 kubeadm.go:934] updating node { 192.168.39.237 8443 v1.31.1 crio true true} ...
	I0920 18:13:14.893002  749135 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-446299 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.237
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-446299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
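Editor's note: the [Unit]/[Service] fragment above is written as a systemd drop-in rather than a full unit (it is copied to the guest as 10-kubeadm.conf a few steps further down). A sketch for inspecting how it merges with the base kubelet unit once it is in place:

	systemctl cat kubelet      # shows kubelet.service plus the 10-kubeadm.conf drop-in with the ExecStart above
	systemctl status kubelet   # the running ExecStart line should match those flags once the service starts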
	I0920 18:13:14.893069  749135 ssh_runner.go:195] Run: crio config
	I0920 18:13:14.935948  749135 cni.go:84] Creating CNI manager for ""
	I0920 18:13:14.935974  749135 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:13:14.935987  749135 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:13:14.936010  749135 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.237 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-446299 NodeName:addons-446299 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.237"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.237 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:13:14.936153  749135 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.237
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-446299"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.237
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.237"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
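Editor's note: this generated config can be sanity-checked before kubeadm consumes it. A sketch using the kubeadm binary already staged on the guest (the file is scp'd as kubeadm.yaml.new and copied to this path further down in the log; the "config validate" subcommand exists in recent kubeadm releases and is an assumption for this exact build):

	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	# or render everything without touching the node:
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run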
	I0920 18:13:14.936224  749135 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:13:14.945879  749135 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:13:14.945951  749135 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:13:14.955112  749135 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 18:13:14.971443  749135 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:13:14.987494  749135 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0920 18:13:15.004128  749135 ssh_runner.go:195] Run: grep 192.168.39.237	control-plane.minikube.internal$ /etc/hosts
	I0920 18:13:15.008311  749135 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.237	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:13:15.020386  749135 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:13:15.143207  749135 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:13:15.160928  749135 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299 for IP: 192.168.39.237
	I0920 18:13:15.160952  749135 certs.go:194] generating shared ca certs ...
	I0920 18:13:15.160971  749135 certs.go:226] acquiring lock for ca certs: {Name:mkf559981e1ff96dd3b092845a7637f34a653668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.161127  749135 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key
	I0920 18:13:15.288325  749135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt ...
	I0920 18:13:15.288359  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt: {Name:mkd07e710befe398f359697123be87266dbb73cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.288526  749135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key ...
	I0920 18:13:15.288537  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key: {Name:mk8452559729a4e6fe54cdcaa3db5cb2d03b365d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.288610  749135 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key
	I0920 18:13:15.460720  749135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt ...
	I0920 18:13:15.460749  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt: {Name:mkd5912367400d11fe28d50162d9491c1c026ad6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.460926  749135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key ...
	I0920 18:13:15.460946  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key: {Name:mk7b4a10567303413b299060d87451a86c82a4b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.461047  749135 certs.go:256] generating profile certs ...
	I0920 18:13:15.461131  749135 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.key
	I0920 18:13:15.461148  749135 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt with IP's: []
	I0920 18:13:15.666412  749135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt ...
	I0920 18:13:15.666455  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: {Name:mkef01489d7dcf2bfb46ac5af11bed50283fb691 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.666668  749135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.key ...
	I0920 18:13:15.666687  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.key: {Name:mkce7236a454e2c0202c83ef853c169198fb2f81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.666791  749135 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.key.77016387
	I0920 18:13:15.666816  749135 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.crt.77016387 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.237]
	I0920 18:13:15.705625  749135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.crt.77016387 ...
	I0920 18:13:15.705654  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.crt.77016387: {Name:mk64bf6bb73ff35990c8781efc3d30626dc3ca21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.705826  749135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.key.77016387 ...
	I0920 18:13:15.705843  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.key.77016387: {Name:mk18ead88f15a69013b31853d623fd0cb8c39466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.705941  749135 certs.go:381] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.crt.77016387 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.crt
	I0920 18:13:15.706040  749135 certs.go:385] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.key.77016387 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.key
	I0920 18:13:15.706114  749135 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.key
	I0920 18:13:15.706140  749135 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.crt with IP's: []
	I0920 18:13:15.788260  749135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.crt ...
	I0920 18:13:15.788293  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.crt: {Name:mk5ff8fc31363db98a0f0ca7278de49be24b8420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.788475  749135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.key ...
	I0920 18:13:15.788494  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.key: {Name:mk7a90a72aaffce450a2196a523cb38d8ddfd4f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.788714  749135 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:13:15.788762  749135 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:13:15.788796  749135 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:13:15.788835  749135 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem (1679 bytes)
	I0920 18:13:15.789513  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:13:15.814280  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:13:15.838979  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:13:15.861251  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:13:15.883772  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 18:13:15.906899  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:13:15.930055  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:13:15.952960  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:13:15.976078  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:13:15.998990  749135 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:13:16.015378  749135 ssh_runner.go:195] Run: openssl version
	I0920 18:13:16.021288  749135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:13:16.031743  749135 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:13:16.036218  749135 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:13:16.036292  749135 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:13:16.041983  749135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
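Editor's note: the b5213941.0 name is not arbitrary; OpenSSL looks up trust anchors by the certificate's subject hash, so the symlink must be named <hash>.0. A minimal sketch reproducing the two steps above by hand:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints the subject hash, e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/"$HASH".0
	openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem   # the self-signed CA should now verify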
	I0920 18:13:16.052410  749135 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:13:16.056509  749135 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 18:13:16.056561  749135 kubeadm.go:392] StartCluster: {Name:addons-446299 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-446299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:13:16.056643  749135 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:13:16.056724  749135 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:13:16.093233  749135 cri.go:89] found id: ""
	I0920 18:13:16.093305  749135 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:13:16.103183  749135 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:13:16.112220  749135 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:13:16.121055  749135 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:13:16.121076  749135 kubeadm.go:157] found existing configuration files:
	
	I0920 18:13:16.121125  749135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:13:16.129727  749135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:13:16.129793  749135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:13:16.138769  749135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:13:16.147343  749135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:13:16.147401  749135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:13:16.156084  749135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:13:16.164356  749135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:13:16.164409  749135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:13:16.172957  749135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:13:16.181269  749135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:13:16.181319  749135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:13:16.189971  749135 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:13:16.241816  749135 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 18:13:16.242023  749135 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:13:16.343705  749135 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:13:16.343865  749135 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:13:16.344016  749135 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 18:13:16.353422  749135 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:13:16.356505  749135 out.go:235]   - Generating certificates and keys ...
	I0920 18:13:16.356621  749135 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:13:16.356707  749135 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:13:16.567905  749135 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 18:13:16.678138  749135 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 18:13:16.903150  749135 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 18:13:17.220781  749135 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 18:13:17.330970  749135 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 18:13:17.331262  749135 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-446299 localhost] and IPs [192.168.39.237 127.0.0.1 ::1]
	I0920 18:13:17.404562  749135 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 18:13:17.404723  749135 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-446299 localhost] and IPs [192.168.39.237 127.0.0.1 ::1]
	I0920 18:13:17.558748  749135 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 18:13:17.723982  749135 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 18:13:17.850510  749135 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 18:13:17.850712  749135 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:13:17.910185  749135 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:13:18.072173  749135 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 18:13:18.135494  749135 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:13:18.547143  749135 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:13:18.760484  749135 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:13:18.761203  749135 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:13:18.765007  749135 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:13:18.801126  749135 out.go:235]   - Booting up control plane ...
	I0920 18:13:18.801251  749135 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:13:18.801344  749135 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:13:18.801424  749135 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:13:18.801571  749135 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:13:18.801721  749135 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:13:18.801785  749135 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:13:18.927609  749135 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 18:13:18.927774  749135 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 18:13:19.928576  749135 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001817815s
	I0920 18:13:19.928734  749135 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 18:13:24.427415  749135 kubeadm.go:310] [api-check] The API server is healthy after 4.501490258s
	I0920 18:13:24.439460  749135 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 18:13:24.456660  749135 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 18:13:24.489726  749135 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 18:13:24.489974  749135 kubeadm.go:310] [mark-control-plane] Marking the node addons-446299 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 18:13:24.502419  749135 kubeadm.go:310] [bootstrap-token] Using token: 2qbco4.c4cth5cwyyzw51bf
	I0920 18:13:24.503870  749135 out.go:235]   - Configuring RBAC rules ...
	I0920 18:13:24.504029  749135 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 18:13:24.514334  749135 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 18:13:24.520831  749135 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 18:13:24.524418  749135 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 18:13:24.527658  749135 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 18:13:24.533751  749135 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 18:13:24.833210  749135 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 18:13:25.263206  749135 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 18:13:25.833304  749135 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 18:13:25.834184  749135 kubeadm.go:310] 
	I0920 18:13:25.834298  749135 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 18:13:25.834327  749135 kubeadm.go:310] 
	I0920 18:13:25.834438  749135 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 18:13:25.834450  749135 kubeadm.go:310] 
	I0920 18:13:25.834490  749135 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 18:13:25.834595  749135 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 18:13:25.834657  749135 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 18:13:25.834674  749135 kubeadm.go:310] 
	I0920 18:13:25.834745  749135 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 18:13:25.834754  749135 kubeadm.go:310] 
	I0920 18:13:25.834980  749135 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 18:13:25.834997  749135 kubeadm.go:310] 
	I0920 18:13:25.835059  749135 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 18:13:25.835163  749135 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 18:13:25.835253  749135 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 18:13:25.835263  749135 kubeadm.go:310] 
	I0920 18:13:25.835376  749135 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 18:13:25.835483  749135 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 18:13:25.835490  749135 kubeadm.go:310] 
	I0920 18:13:25.835595  749135 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2qbco4.c4cth5cwyyzw51bf \
	I0920 18:13:25.835757  749135 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:947ef21afc8104efa9fe7e5dbe397ab7540e2665a521761d784eb9c9d11b061d \
	I0920 18:13:25.835806  749135 kubeadm.go:310] 	--control-plane 
	I0920 18:13:25.835816  749135 kubeadm.go:310] 
	I0920 18:13:25.835914  749135 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 18:13:25.835926  749135 kubeadm.go:310] 
	I0920 18:13:25.836021  749135 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2qbco4.c4cth5cwyyzw51bf \
	I0920 18:13:25.836149  749135 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:947ef21afc8104efa9fe7e5dbe397ab7540e2665a521761d784eb9c9d11b061d 
	I0920 18:13:25.837593  749135 kubeadm.go:310] W0920 18:13:16.222475     810 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:13:25.837868  749135 kubeadm.go:310] W0920 18:13:16.223486     810 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:13:25.837990  749135 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
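
The join commands printed above carry a --discovery-token-ca-cert-hash value, which kubeadm defines as the SHA-256 digest of the DER-encoded public key (SubjectPublicKeyInfo) of the cluster CA. The short Go sketch below shows how that value can be recomputed from the CA certificate; reading /etc/kubernetes/pki/ca.crt directly is an illustrative assumption rather than something this test does.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Load the cluster CA certificate from its conventional kubeadm location.
	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The hash covers the DER-encoded SubjectPublicKeyInfo of the CA key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
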
	I0920 18:13:25.838019  749135 cni.go:84] Creating CNI manager for ""
	I0920 18:13:25.838028  749135 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:13:25.839751  749135 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 18:13:25.840949  749135 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 18:13:25.852783  749135 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
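
The bridge CNI step above copies a 496-byte conflist from memory to /etc/cni/net.d/1-k8s.conflist. The sketch below writes a conflist of the same general shape (a bridge plugin with host-local IPAM, chained with portmap); the field values are assumed for illustration and are not claimed to match the exact file minikube generated in this run.

package main

import "os"

// bridgeConflist is an illustrative bridge CNI configuration; the subnet and
// flag values here are assumptions, not the literal bytes written above.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "forceAddress": false,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}
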
	I0920 18:13:25.871921  749135 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:13:25.871998  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:25.872010  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-446299 minikube.k8s.io/updated_at=2024_09_20T18_13_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a minikube.k8s.io/name=addons-446299 minikube.k8s.io/primary=true
	I0920 18:13:25.893378  749135 ops.go:34] apiserver oom_adj: -16
	I0920 18:13:26.025723  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:26.526635  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:27.026038  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:27.526100  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:28.026195  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:28.526494  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:29.026560  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:29.526369  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:30.026015  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:30.116670  749135 kubeadm.go:1113] duration metric: took 4.244739753s to wait for elevateKubeSystemPrivileges
	I0920 18:13:30.116706  749135 kubeadm.go:394] duration metric: took 14.06015239s to StartCluster
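
The run of repeated `kubectl get sa default` calls above, roughly every half second, is minikube waiting for the default service account to exist before it finishes elevating kube-system privileges (the minikube-rbac clusterrolebinding and node labels are created just before the loop). A hedged Go sketch of that poll-until-ready pattern follows; the helper name and timeout are illustrative, not minikube's internals.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries `kubectl get sa default` until it succeeds or the
// timeout elapses, mirroring the ~0.5s polling cadence visible in the log.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default service account exists; RBAC bootstrap can finish
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("default service account is ready")
}
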
	I0920 18:13:30.116726  749135 settings.go:142] acquiring lock: {Name:mk0bd1e421bf437575c076c52c1ff2f74497a1ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:30.116861  749135 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:13:30.117227  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/kubeconfig: {Name:mk275c54cf52b0ccdc22fcaa39c7b9c31092c648 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:30.117422  749135 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 18:13:30.117448  749135 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:13:30.117512  749135 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
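
The toEnable map logged above drives the long run of "Setting addon X=true in profile" lines that follow: each addon whose flag is true gets its own configuration pass and, later, its own goroutine talking to the driver. A simplified sketch of that dispatch is shown below; the addon names are copied from the map, but the loop itself is purely illustrative and not minikube's addons API.

package main

import (
	"fmt"
	"sort"
)

func main() {
	// A few entries from the toEnable map shown in the log; the full map
	// lists every addon with an explicit true/false flag.
	toEnable := map[string]bool{
		"ingress":             true,
		"ingress-dns":         true,
		"metrics-server":      true,
		"registry":            true,
		"storage-provisioner": true,
		"volcano":             true, // later rejected: "volcano addon does not support crio"
		"dashboard":           false,
	}

	// Sorted here only to make the sketch deterministic; the real log lines
	// appear in whatever order the setup goroutines reach them.
	names := make([]string, 0, len(toEnable))
	for name := range toEnable {
		names = append(names, name)
	}
	sort.Strings(names)

	for _, name := range names {
		if !toEnable[name] {
			continue
		}
		fmt.Printf("Setting addon %s=true in profile %q\n", name, "addons-446299")
		// a real implementation would now render and apply this addon's manifests
	}
}
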
	I0920 18:13:30.117640  749135 addons.go:69] Setting yakd=true in profile "addons-446299"
	I0920 18:13:30.117667  749135 addons.go:234] Setting addon yakd=true in "addons-446299"
	I0920 18:13:30.117700  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.117727  749135 config.go:182] Loaded profile config "addons-446299": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:13:30.117688  749135 addons.go:69] Setting default-storageclass=true in profile "addons-446299"
	I0920 18:13:30.117804  749135 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-446299"
	I0920 18:13:30.117694  749135 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-446299"
	I0920 18:13:30.117828  749135 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-446299"
	I0920 18:13:30.117867  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.117708  749135 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-446299"
	I0920 18:13:30.117998  749135 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-446299"
	I0920 18:13:30.117714  749135 addons.go:69] Setting inspektor-gadget=true in profile "addons-446299"
	I0920 18:13:30.118028  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.118044  749135 addons.go:234] Setting addon inspektor-gadget=true in "addons-446299"
	I0920 18:13:30.118082  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.117716  749135 addons.go:69] Setting gcp-auth=true in profile "addons-446299"
	I0920 18:13:30.118200  749135 mustload.go:65] Loading cluster: addons-446299
	I0920 18:13:30.118199  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.118219  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.117703  749135 addons.go:69] Setting ingress-dns=true in profile "addons-446299"
	I0920 18:13:30.118237  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.118242  749135 addons.go:234] Setting addon ingress-dns=true in "addons-446299"
	I0920 18:13:30.118250  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.118270  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.118376  749135 config.go:182] Loaded profile config "addons-446299": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:13:30.118380  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.118401  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.118492  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.118530  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.118647  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.118678  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.117720  749135 addons.go:69] Setting metrics-server=true in profile "addons-446299"
	I0920 18:13:30.118748  749135 addons.go:234] Setting addon metrics-server=true in "addons-446299"
	I0920 18:13:30.118777  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.118823  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.118831  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.118883  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.118889  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.117726  749135 addons.go:69] Setting ingress=true in profile "addons-446299"
	I0920 18:13:30.119096  749135 addons.go:234] Setting addon ingress=true in "addons-446299"
	I0920 18:13:30.119137  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.117736  749135 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-446299"
	I0920 18:13:30.119353  749135 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-446299"
	I0920 18:13:30.119501  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.119521  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.119740  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.117735  749135 addons.go:69] Setting registry=true in profile "addons-446299"
	I0920 18:13:30.119761  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.119766  749135 addons.go:234] Setting addon registry=true in "addons-446299"
	I0920 18:13:30.119795  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.120169  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.120211  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.117735  749135 addons.go:69] Setting cloud-spanner=true in profile "addons-446299"
	I0920 18:13:30.120247  749135 addons.go:234] Setting addon cloud-spanner=true in "addons-446299"
	I0920 18:13:30.117743  749135 addons.go:69] Setting volcano=true in profile "addons-446299"
	I0920 18:13:30.120264  749135 addons.go:234] Setting addon volcano=true in "addons-446299"
	I0920 18:13:30.120292  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.120352  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.117744  749135 addons.go:69] Setting storage-provisioner=true in profile "addons-446299"
	I0920 18:13:30.120495  749135 addons.go:234] Setting addon storage-provisioner=true in "addons-446299"
	I0920 18:13:30.120536  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.120768  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.120790  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.117753  749135 addons.go:69] Setting volumesnapshots=true in profile "addons-446299"
	I0920 18:13:30.120925  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.120933  749135 addons.go:234] Setting addon volumesnapshots=true in "addons-446299"
	I0920 18:13:30.120955  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.120966  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.122929  749135 out.go:177] * Verifying Kubernetes components...
	I0920 18:13:30.124310  749135 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:13:30.139606  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35413
	I0920 18:13:30.139626  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43313
	I0920 18:13:30.139664  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38439
	I0920 18:13:30.139664  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35171
	I0920 18:13:30.151212  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37399
	I0920 18:13:30.151245  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.151251  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34369
	I0920 18:13:30.151274  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.151393  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.151405  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.151438  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.151856  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.151891  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.152064  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.152188  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.152245  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.152411  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.152423  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.152487  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.152534  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.152664  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.152678  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.152736  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.152850  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.152861  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.152984  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.152995  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.153048  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.153483  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.153515  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.154013  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46593
	I0920 18:13:30.154291  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.154314  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.154382  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.154805  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.154867  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.155632  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.155794  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.155815  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.155882  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.156284  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.156326  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.159168  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.159296  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.159618  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.159652  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.159773  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.159808  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.160117  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.160143  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.160217  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.160647  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.161813  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.161856  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.164600  749135 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-446299"
	I0920 18:13:30.164649  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.165039  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.165072  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.176807  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33581
	I0920 18:13:30.177469  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.178091  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.178111  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.178583  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.179242  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.179271  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.185984  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43023
	I0920 18:13:30.186586  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.187123  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.187144  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.187554  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.188160  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.188203  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.193206  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39461
	I0920 18:13:30.193417  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33557
	I0920 18:13:30.193849  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.194099  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.194452  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.194471  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.194968  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.195118  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.195132  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.195349  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38343
	I0920 18:13:30.195438  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.196077  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.196556  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.196580  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.197033  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.197694  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.197734  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.197960  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36057
	I0920 18:13:30.198500  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.198621  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.198726  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37865
	I0920 18:13:30.198876  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.199030  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.199369  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.199385  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.199416  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.199438  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.199710  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.200318  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.200362  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.200438  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.201288  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.201893  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.201916  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.203229  749135 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 18:13:30.204746  749135 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 18:13:30.204766  749135 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 18:13:30.204788  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.206295  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.206675  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.207700  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43965
	I0920 18:13:30.208147  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.208668  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.208691  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.209400  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.209672  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.209714  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.210328  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.210357  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.210920  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.210948  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.211140  749135 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 18:13:30.211638  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.212145  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.212323  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.212494  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.212630  749135 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 18:13:30.212646  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 18:13:30.212664  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.213593  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39695
	I0920 18:13:30.214660  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34213
	I0920 18:13:30.215405  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.215903  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.215924  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.216384  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.216437  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.216507  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.216537  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.216592  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37735
	I0920 18:13:30.217041  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.217047  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.217305  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.217448  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.217585  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.218334  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.218356  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.218795  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.219018  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.219181  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38177
	I0920 18:13:30.219880  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.219925  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.219979  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.220067  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.220460  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.220482  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.220702  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.220722  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.220787  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38889
	I0920 18:13:30.221095  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.221183  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.221329  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.221386  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:30.221397  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:30.223334  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.223352  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.223398  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:30.223412  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:30.223419  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:30.223427  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:30.223433  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:30.223529  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.224012  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:30.224041  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:30.224048  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	W0920 18:13:30.224154  749135 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0920 18:13:30.224543  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41669
	I0920 18:13:30.225486  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.225509  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.226183  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.226202  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.226560  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 18:13:30.226986  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.227285  749135 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 18:13:30.227644  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.227684  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.228253  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34967
	I0920 18:13:30.228649  749135 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 18:13:30.228675  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 18:13:30.228697  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.229313  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42909
	I0920 18:13:30.229673  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.230049  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 18:13:30.230142  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.230158  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.230485  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.230672  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.231280  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.231806  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46577
	I0920 18:13:30.231963  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.231988  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.232145  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.232332  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.232428  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.232440  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 18:13:30.232482  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.232696  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.233542  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.233796  749135 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:13:30.234419  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.234438  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.234783  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 18:13:30.235010  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.235348  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.236127  749135 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 18:13:30.236900  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38295
	I0920 18:13:30.237440  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 18:13:30.237599  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39621
	I0920 18:13:30.238719  749135 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:13:30.239949  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 18:13:30.240129  749135 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 18:13:30.240146  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 18:13:30.240162  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.242347  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 18:13:30.243261  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.243644  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.243673  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.243908  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.244083  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.244194  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.244349  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.244407  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44717
	I0920 18:13:30.244610  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 18:13:30.245914  749135 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 18:13:30.245941  749135 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 18:13:30.245963  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.246673  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41943
	I0920 18:13:30.247429  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.247556  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.247990  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.248061  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.248074  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.248079  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.248343  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.248449  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.248449  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.248468  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.248596  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.248607  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.248648  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.248833  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.249170  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.249280  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.249352  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.249393  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.249409  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.250084  749135 addons.go:234] Setting addon default-storageclass=true in "addons-446299"
	I0920 18:13:30.250124  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.250508  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.250532  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.251170  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.251192  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.251274  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.251488  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.251857  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.251862  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.251910  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.251940  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.252078  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.252212  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.252224  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.252440  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.252553  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.252748  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.252820  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.252833  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.253735  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.253941  749135 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 18:13:30.254017  749135 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 18:13:30.253980  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.254455  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.254656  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.254870  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.254873  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.255177  749135 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:13:30.255187  749135 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 18:13:30.255205  749135 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 18:13:30.255226  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.255274  749135 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 18:13:30.255278  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.255288  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 18:13:30.255303  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.256466  749135 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 18:13:30.256532  749135 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:13:30.256552  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:13:30.256570  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.258154  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 18:13:30.259159  749135 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 18:13:30.259174  749135 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 18:13:30.259188  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.259235  749135 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 18:13:30.260368  749135 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 18:13:30.260382  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 18:13:30.260394  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.260519  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.260844  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.260873  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.261038  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.261196  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.262948  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.263013  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.263033  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.263050  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.263161  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.263545  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.263701  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.264179  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.264417  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.264628  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.265340  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.265500  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.265732  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.265751  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.266060  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.266249  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.266266  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.266441  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.266593  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.266625  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.266670  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.266742  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.267063  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.267118  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.267232  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.267247  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.267357  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.267382  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.267549  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.267839  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.269511  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36981
	I0920 18:13:30.269878  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.270901  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.270926  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.271296  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.271468  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.273221  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.274917  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43681
	I0920 18:13:30.275136  749135 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 18:13:30.275446  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.276076  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.276096  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.276414  749135 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 18:13:30.276440  749135 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 18:13:30.276461  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.276501  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.276736  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.278674  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.280057  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.280316  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.280342  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.280375  749135 out.go:177]   - Using image docker.io/busybox:stable
	I0920 18:13:30.280530  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.280706  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.280828  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.280961  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	W0920 18:13:30.281845  749135 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:35600->192.168.39.237:22: read: connection reset by peer
	I0920 18:13:30.281937  749135 retry.go:31] will retry after 148.234221ms: ssh: handshake failed: read tcp 192.168.39.1:35600->192.168.39.237:22: read: connection reset by peer
	I0920 18:13:30.282766  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37633
	I0920 18:13:30.282794  749135 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 18:13:30.283193  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.283743  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.283764  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.284120  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.284286  749135 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 18:13:30.284302  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 18:13:30.284319  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.284696  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.284848  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.290962  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.290998  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.291015  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.291035  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.291443  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.291607  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.291761  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.301013  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39971
	I0920 18:13:30.301540  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.302060  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.302090  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.302449  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.302621  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.303997  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.304220  749135 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:13:30.304236  749135 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:13:30.304256  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.307237  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.307715  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.307749  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.307899  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.308079  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.308237  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.308392  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.604495  749135 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:13:30.604525  749135 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 18:13:30.661112  749135 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 18:13:30.661146  749135 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 18:13:30.662437  749135 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 18:13:30.662469  749135 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 18:13:30.705589  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 18:13:30.750149  749135 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 18:13:30.750187  749135 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 18:13:30.753172  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 18:13:30.755196  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 18:13:30.771513  749135 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 18:13:30.771540  749135 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 18:13:30.797810  749135 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 18:13:30.797835  749135 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 18:13:30.807101  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 18:13:30.868448  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:13:30.869944  749135 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 18:13:30.869963  749135 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 18:13:30.871146  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:13:30.896462  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 18:13:30.900930  749135 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 18:13:30.900959  749135 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 18:13:30.906831  749135 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 18:13:30.906880  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 18:13:30.933744  749135 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 18:13:30.933774  749135 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 18:13:30.969038  749135 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 18:13:30.969076  749135 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 18:13:31.000321  749135 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 18:13:31.000354  749135 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 18:13:31.182228  749135 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 18:13:31.182256  749135 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 18:13:31.198470  749135 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 18:13:31.198506  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 18:13:31.232002  749135 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 18:13:31.232027  749135 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 18:13:31.241138  749135 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 18:13:31.241162  749135 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 18:13:31.303359  749135 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 18:13:31.303389  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 18:13:31.308659  749135 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 18:13:31.308686  749135 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 18:13:31.411918  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 18:13:31.444332  749135 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 18:13:31.444368  749135 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 18:13:31.517643  749135 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:13:31.517669  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 18:13:31.522528  749135 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 18:13:31.522555  749135 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 18:13:31.527932  749135 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:13:31.527961  749135 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 18:13:31.598680  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 18:13:31.753266  749135 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 18:13:31.753305  749135 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 18:13:31.825090  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:13:31.868789  749135 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 18:13:31.868821  749135 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 18:13:31.871872  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:13:32.035165  749135 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 18:13:32.035205  749135 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 18:13:32.325034  749135 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 18:13:32.325068  749135 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 18:13:32.426301  749135 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 18:13:32.426330  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 18:13:32.734227  749135 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 18:13:32.734252  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 18:13:32.776162  749135 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 18:13:32.776201  749135 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 18:13:32.973816  749135 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.369238207s)
	I0920 18:13:32.973844  749135 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.369303036s)
	I0920 18:13:32.973868  749135 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0920 18:13:32.974717  749135 node_ready.go:35] waiting up to 6m0s for node "addons-446299" to be "Ready" ...
	I0920 18:13:32.978640  749135 node_ready.go:49] node "addons-446299" has status "Ready":"True"
	I0920 18:13:32.978660  749135 node_ready.go:38] duration metric: took 3.921107ms for node "addons-446299" to be "Ready" ...
	I0920 18:13:32.978672  749135 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:13:32.990987  749135 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8b5fx" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:33.092955  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 18:13:33.125330  749135 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 18:13:33.125357  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 18:13:33.271505  749135 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 18:13:33.271534  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 18:13:33.497723  749135 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-446299" context rescaled to 1 replicas
	I0920 18:13:33.600812  749135 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 18:13:33.600847  749135 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 18:13:33.656016  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.902807697s)
	I0920 18:13:33.656075  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:33.656075  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.900839477s)
	I0920 18:13:33.656016  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.950386811s)
	I0920 18:13:33.656109  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:33.656121  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:33.656127  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:33.656090  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:33.656146  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:33.656567  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:33.656587  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:33.656608  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:33.656624  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:33.656627  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:33.656653  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:33.656665  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:33.656676  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:33.656635  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:33.656718  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:33.656637  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:33.656744  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:33.656760  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:33.656767  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:33.656730  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:33.657076  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:33.657118  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:33.657119  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:33.657096  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:33.657156  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:33.657263  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:33.657279  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:33.758218  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 18:13:35.015799  749135 pod_ready.go:103] pod "coredns-7c65d6cfc9-8b5fx" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:35.494820  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.687683083s)
	I0920 18:13:35.494889  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.494891  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.626405857s)
	I0920 18:13:35.494920  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.494932  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.494930  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.623755287s)
	I0920 18:13:35.494950  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.494983  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.495052  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.495370  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.495388  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.495396  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.495404  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.496899  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:35.496907  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:35.496907  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:35.496946  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.496958  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.496966  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.496977  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.496990  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.496999  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.497065  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.497077  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.497089  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.497098  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.497258  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.497276  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.498278  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:35.498290  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.498301  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.545445  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.545475  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.545718  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.545745  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.545752  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	W0920 18:13:35.545859  749135 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0920 18:13:35.559802  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.559831  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.560074  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.560092  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.560108  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:36.023603  749135 pod_ready.go:93] pod "coredns-7c65d6cfc9-8b5fx" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:36.023630  749135 pod_ready.go:82] duration metric: took 3.032619357s for pod "coredns-7c65d6cfc9-8b5fx" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.023643  749135 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tfngl" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.059659  749135 pod_ready.go:93] pod "coredns-7c65d6cfc9-tfngl" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:36.059693  749135 pod_ready.go:82] duration metric: took 36.040161ms for pod "coredns-7c65d6cfc9-tfngl" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.059705  749135 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.075393  749135 pod_ready.go:93] pod "etcd-addons-446299" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:36.075428  749135 pod_ready.go:82] duration metric: took 15.714418ms for pod "etcd-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.075441  749135 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.089509  749135 pod_ready.go:93] pod "kube-apiserver-addons-446299" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:36.089536  749135 pod_ready.go:82] duration metric: took 14.086774ms for pod "kube-apiserver-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.089546  749135 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.600534  749135 pod_ready.go:93] pod "kube-controller-manager-addons-446299" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:36.600565  749135 pod_ready.go:82] duration metric: took 511.011851ms for pod "kube-controller-manager-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.600579  749135 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9pcgb" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.797080  749135 pod_ready.go:93] pod "kube-proxy-9pcgb" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:36.797111  749135 pod_ready.go:82] duration metric: took 196.523175ms for pod "kube-proxy-9pcgb" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.797123  749135 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:37.195153  749135 pod_ready.go:93] pod "kube-scheduler-addons-446299" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:37.195185  749135 pod_ready.go:82] duration metric: took 398.053895ms for pod "kube-scheduler-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:37.195198  749135 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:37.260708  749135 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 18:13:37.260749  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:37.264035  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:37.264543  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:37.264579  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:37.264739  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:37.264958  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:37.265141  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:37.265285  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:37.472764  749135 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 18:13:37.656998  749135 addons.go:234] Setting addon gcp-auth=true in "addons-446299"
	I0920 18:13:37.657072  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:37.657494  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:37.657545  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:37.673709  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40331
	I0920 18:13:37.674398  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:37.674958  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:37.674981  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:37.675363  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:37.675843  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:37.675888  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:37.691444  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38543
	I0920 18:13:37.692042  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:37.692560  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:37.692593  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:37.693006  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:37.693249  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:37.695166  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:37.695451  749135 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 18:13:37.695481  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:37.698450  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:37.698921  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:37.698953  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:37.699128  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:37.699312  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:37.699441  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:37.699604  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:38.819493  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.922986564s)
	I0920 18:13:38.819541  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.407583803s)
	I0920 18:13:38.819575  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.819591  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.819607  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.819648  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.220925429s)
	I0920 18:13:38.819598  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.819686  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.819705  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.819778  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.994650356s)
	W0920 18:13:38.819815  749135 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 18:13:38.819840  749135 retry.go:31] will retry after 365.705658ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 18:13:38.819845  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.947942371s)
	I0920 18:13:38.819873  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.819885  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.819961  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.726965652s)
	I0920 18:13:38.820001  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.820012  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.820227  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.820244  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.820285  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.820295  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.820413  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:38.820433  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:38.820460  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.820467  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.820475  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.820481  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.820629  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.820639  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.820647  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.820655  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.820718  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:38.820773  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.820781  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.820789  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.820795  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.821299  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:38.821316  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:38.821349  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.821355  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.821365  749135 addons.go:475] Verifying addon registry=true in "addons-446299"
	I0920 18:13:38.821906  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.821917  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.821926  749135 addons.go:475] Verifying addon ingress=true in "addons-446299"
	I0920 18:13:38.821997  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.822026  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.822038  749135 addons.go:475] Verifying addon metrics-server=true in "addons-446299"
	I0920 18:13:38.822070  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.822084  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.822092  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.822100  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.822128  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.822143  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.822495  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:38.822542  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.822551  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.823406  749135 out.go:177] * Verifying ingress addon...
	I0920 18:13:38.823868  749135 out.go:177] * Verifying registry addon...
	I0920 18:13:38.824871  749135 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-446299 service yakd-dashboard -n yakd-dashboard
	
	I0920 18:13:38.825597  749135 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 18:13:38.826680  749135 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 18:13:38.844205  749135 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 18:13:38.844236  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:38.850356  749135 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 18:13:38.850383  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:39.186375  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:13:39.200878  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:39.330411  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:39.330769  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:39.849376  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:39.851690  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:40.361850  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:40.362230  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:41.034778  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:41.035000  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:41.038162  749135 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.342687523s)
	I0920 18:13:41.038403  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.280132041s)
	I0920 18:13:41.038461  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:41.038481  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:41.038819  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:41.038884  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:41.038905  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:41.038922  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:41.039163  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:41.039205  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:41.039225  749135 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-446299"
	I0920 18:13:41.039205  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:41.041287  749135 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 18:13:41.041290  749135 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:13:41.043438  749135 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 18:13:41.044297  749135 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 18:13:41.044713  749135 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 18:13:41.044732  749135 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 18:13:41.101841  749135 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 18:13:41.101863  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:41.130328  749135 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 18:13:41.130361  749135 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 18:13:41.246926  749135 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 18:13:41.246950  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 18:13:41.330722  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:41.331217  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:41.367190  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 18:13:41.375612  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.189187999s)
	I0920 18:13:41.375679  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:41.375703  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:41.376082  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:41.376123  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:41.376131  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:41.376140  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:41.376180  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:41.376437  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:41.376461  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:41.376464  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:41.548363  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:41.701651  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:41.831758  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:41.831933  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:42.053967  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:42.331450  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:42.331860  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:42.559368  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:42.796101  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.428861154s)
	I0920 18:13:42.796164  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:42.796186  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:42.796539  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:42.796652  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:42.796628  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:42.796665  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:42.796674  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:42.796931  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:42.796948  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:42.796971  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:42.798018  749135 addons.go:475] Verifying addon gcp-auth=true in "addons-446299"
	I0920 18:13:42.799750  749135 out.go:177] * Verifying gcp-auth addon...
	I0920 18:13:42.801961  749135 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 18:13:42.813536  749135 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 18:13:42.813557  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
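The repeated kapi.go lines above come from per-addon wait loops that list pods by label selector and keep polling until they leave Pending. A minimal, self-contained sketch of that pattern using client-go (illustrative only, not minikube's actual implementation; the selector, namespace, interval, and timeout below are assumptions):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsRunning polls pods matching selector in ns until all of them are
// Running or the timeout expires, mirroring the "waiting for pod ..." log lines.
func waitForPodsRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		if len(pods.Items) == 0 {
			return false, nil // nothing scheduled yet, keep waiting
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				return false, nil
			}
		}
		return true, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Hypothetical call: wait for the registry addon pods in kube-system.
	if err := waitForPodsRunning(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
		panic(err)
	}
}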
	I0920 18:13:42.834100  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:42.834512  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:43.050004  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:43.305311  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:43.330407  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:43.331586  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:43.549945  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:43.702111  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:43.806287  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:43.830332  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:43.830560  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:44.050313  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:44.307181  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:44.332062  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:44.332579  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:44.549621  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:44.806074  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:44.830087  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:44.830821  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:45.049798  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:45.305355  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:45.329798  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:45.330472  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:45.549159  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:45.702368  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:45.805600  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:45.830331  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:45.831003  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:46.048681  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:46.476235  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:46.476881  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:46.477765  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:46.576766  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:46.805777  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:46.830583  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:46.831463  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:47.050496  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:47.307091  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:47.330512  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:47.331048  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:47.549305  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:47.805735  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:47.830215  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:47.831512  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:48.049902  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:48.202178  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:48.306243  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:48.329718  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:48.332280  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:48.550170  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:48.805429  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:48.829830  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:48.831490  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:49.050407  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:49.305950  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:49.331188  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:49.331284  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:49.549193  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:49.805377  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:49.831064  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:49.831335  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:50.050205  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:50.205469  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:50.306610  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:50.330226  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:50.331728  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:50.548853  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:50.806045  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:50.830924  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:50.831062  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:51.049036  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:51.305994  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:51.330295  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:51.330905  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:51.549433  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:51.805870  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:51.830479  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:51.831665  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:52.050500  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:52.305644  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:52.330460  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:52.330909  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:52.549056  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:52.700600  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:52.805458  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:52.829967  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:52.831274  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:53.049224  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:53.306145  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:53.330699  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:53.331032  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:53.548388  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:54.211235  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:54.211371  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:54.211581  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:54.212019  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:54.305931  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:54.332757  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:54.333316  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:54.550241  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:54.701439  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:54.805276  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:54.830616  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:54.831417  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:55.057083  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:55.305836  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:55.330687  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:55.331243  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:55.550673  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:55.701690  749135 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:55.701725  749135 pod_ready.go:82] duration metric: took 18.50651845s for pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:55.701734  749135 pod_ready.go:39] duration metric: took 22.723049339s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:13:55.701754  749135 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:13:55.701817  749135 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:13:55.736899  749135 api_server.go:72] duration metric: took 25.619420852s to wait for apiserver process to appear ...
	I0920 18:13:55.736929  749135 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:13:55.736952  749135 api_server.go:253] Checking apiserver healthz at https://192.168.39.237:8443/healthz ...
	I0920 18:13:55.741901  749135 api_server.go:279] https://192.168.39.237:8443/healthz returned 200:
	ok
	I0920 18:13:55.743609  749135 api_server.go:141] control plane version: v1.31.1
	I0920 18:13:55.743635  749135 api_server.go:131] duration metric: took 6.69997ms to wait for apiserver health ...
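The healthz probe logged just above is a plain HTTPS GET against the apiserver, considered healthy once it answers 200 with body "ok". A minimal sketch of such a probe using only the standard library (illustrative; the endpoint, timeout, and TLS handling are assumptions):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz performs one GET against an apiserver /healthz endpoint and
// reports whether it answered 200 "ok", as in the log lines above.
func checkHealthz(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver certificate is not in the host trust store in this sketch,
		// so verification is skipped; a real probe would pin the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	// Hypothetical endpoint matching the address shown in the log.
	healthy, err := checkHealthz("https://192.168.39.237:8443/healthz")
	fmt.Println(healthy, err)
}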
	I0920 18:13:55.743646  749135 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:13:55.757231  749135 system_pods.go:59] 17 kube-system pods found
	I0920 18:13:55.757585  749135 system_pods.go:61] "coredns-7c65d6cfc9-8b5fx" [226fc466-f0b5-4501-8879-b8b9b8d758ac] Running
	I0920 18:13:55.757615  749135 system_pods.go:61] "csi-hostpath-attacher-0" [b131974d-0f4b-4bc6-bec3-d4c797279aa4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 18:13:55.757633  749135 system_pods.go:61] "csi-hostpath-resizer-0" [684355d7-d68e-4357-8103-d8350a38ea37] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 18:13:55.757647  749135 system_pods.go:61] "csi-hostpathplugin-fcmx5" [1576357c-2e2c-469a-b069-dcac225f49c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 18:13:55.757654  749135 system_pods.go:61] "etcd-addons-446299" [c82607ca-b677-4592-935a-a32dad76e79c] Running
	I0920 18:13:55.757662  749135 system_pods.go:61] "kube-apiserver-addons-446299" [93375989-de9f-4fea-afcc-44d35775ddd6] Running
	I0920 18:13:55.757668  749135 system_pods.go:61] "kube-controller-manager-addons-446299" [4c06855c-f18c-4df4-bd04-584c8594a744] Running
	I0920 18:13:55.757677  749135 system_pods.go:61] "kube-ingress-dns-minikube" [631849c1-f984-4e83-b07b-6b2ed4eb0697] Running
	I0920 18:13:55.757682  749135 system_pods.go:61] "kube-proxy-9pcgb" [934faade-c115-4ced-9bb6-c22a2fe014f2] Running
	I0920 18:13:55.757689  749135 system_pods.go:61] "kube-scheduler-addons-446299" [ce4ce9a3-dd64-47ed-a920-b6c5359c80a7] Running
	I0920 18:13:55.757697  749135 system_pods.go:61] "metrics-server-84c5f94fbc-dgfgh" [84513540-b090-4d24-b6e0-9ed764434018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:13:55.757705  749135 system_pods.go:61] "nvidia-device-plugin-daemonset-6l2l2" [c6db8268-e330-413b-9107-88c63f861e42] Running
	I0920 18:13:55.757714  749135 system_pods.go:61] "registry-66c9cd494c-vxc6t" [10b4cecb-c85b-45ef-8043-e88a81971d51] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 18:13:55.757725  749135 system_pods.go:61] "registry-proxy-bqdmf" [11ab987d-a80f-412a-8a15-03a5898a2e9e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 18:13:55.757738  749135 system_pods.go:61] "snapshot-controller-56fcc65765-4qwlb" [d4cd83fc-a074-4317-9b02-22010ae0ca66] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 18:13:55.757750  749135 system_pods.go:61] "snapshot-controller-56fcc65765-8rk95" [63d1f200-a587-488c-82d3-bf38586a6fd0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 18:13:55.757759  749135 system_pods.go:61] "storage-provisioner" [0e9e378d-208e-46e0-a2be-70f96e59408a] Running
	I0920 18:13:55.757770  749135 system_pods.go:74] duration metric: took 14.117036ms to wait for pod list to return data ...
	I0920 18:13:55.757782  749135 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:13:55.762579  749135 default_sa.go:45] found service account: "default"
	I0920 18:13:55.762610  749135 default_sa.go:55] duration metric: took 4.817698ms for default service account to be created ...
	I0920 18:13:55.762622  749135 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:13:55.772780  749135 system_pods.go:86] 17 kube-system pods found
	I0920 18:13:55.772808  749135 system_pods.go:89] "coredns-7c65d6cfc9-8b5fx" [226fc466-f0b5-4501-8879-b8b9b8d758ac] Running
	I0920 18:13:55.772816  749135 system_pods.go:89] "csi-hostpath-attacher-0" [b131974d-0f4b-4bc6-bec3-d4c797279aa4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 18:13:55.772822  749135 system_pods.go:89] "csi-hostpath-resizer-0" [684355d7-d68e-4357-8103-d8350a38ea37] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 18:13:55.772830  749135 system_pods.go:89] "csi-hostpathplugin-fcmx5" [1576357c-2e2c-469a-b069-dcac225f49c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 18:13:55.772834  749135 system_pods.go:89] "etcd-addons-446299" [c82607ca-b677-4592-935a-a32dad76e79c] Running
	I0920 18:13:55.772839  749135 system_pods.go:89] "kube-apiserver-addons-446299" [93375989-de9f-4fea-afcc-44d35775ddd6] Running
	I0920 18:13:55.772842  749135 system_pods.go:89] "kube-controller-manager-addons-446299" [4c06855c-f18c-4df4-bd04-584c8594a744] Running
	I0920 18:13:55.772847  749135 system_pods.go:89] "kube-ingress-dns-minikube" [631849c1-f984-4e83-b07b-6b2ed4eb0697] Running
	I0920 18:13:55.772851  749135 system_pods.go:89] "kube-proxy-9pcgb" [934faade-c115-4ced-9bb6-c22a2fe014f2] Running
	I0920 18:13:55.772856  749135 system_pods.go:89] "kube-scheduler-addons-446299" [ce4ce9a3-dd64-47ed-a920-b6c5359c80a7] Running
	I0920 18:13:55.772865  749135 system_pods.go:89] "metrics-server-84c5f94fbc-dgfgh" [84513540-b090-4d24-b6e0-9ed764434018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:13:55.772922  749135 system_pods.go:89] "nvidia-device-plugin-daemonset-6l2l2" [c6db8268-e330-413b-9107-88c63f861e42] Running
	I0920 18:13:55.772931  749135 system_pods.go:89] "registry-66c9cd494c-vxc6t" [10b4cecb-c85b-45ef-8043-e88a81971d51] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 18:13:55.772936  749135 system_pods.go:89] "registry-proxy-bqdmf" [11ab987d-a80f-412a-8a15-03a5898a2e9e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 18:13:55.772946  749135 system_pods.go:89] "snapshot-controller-56fcc65765-4qwlb" [d4cd83fc-a074-4317-9b02-22010ae0ca66] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 18:13:55.772953  749135 system_pods.go:89] "snapshot-controller-56fcc65765-8rk95" [63d1f200-a587-488c-82d3-bf38586a6fd0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 18:13:55.772957  749135 system_pods.go:89] "storage-provisioner" [0e9e378d-208e-46e0-a2be-70f96e59408a] Running
	I0920 18:13:55.772963  749135 system_pods.go:126] duration metric: took 10.336403ms to wait for k8s-apps to be running ...
	I0920 18:13:55.772972  749135 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:13:55.773018  749135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:13:55.793348  749135 system_svc.go:56] duration metric: took 20.361414ms WaitForService to wait for kubelet
	I0920 18:13:55.793389  749135 kubeadm.go:582] duration metric: took 25.675912921s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:13:55.793417  749135 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:13:55.802544  749135 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:13:55.802600  749135 node_conditions.go:123] node cpu capacity is 2
	I0920 18:13:55.802617  749135 node_conditions.go:105] duration metric: took 9.193115ms to run NodePressure ...
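The NodePressure step above reads each node's capacity and pressure conditions from the API before start-up is declared complete. A minimal sketch of that check with client-go (illustrative; the kubeconfig path and error handling are assumptions):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity values correspond to the "ephemeral capacity" and "cpu capacity" lines above.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())

		// A node is considered under pressure if MemoryPressure or DiskPressure is True.
		for _, c := range n.Status.Conditions {
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) && c.Status == corev1.ConditionTrue {
				fmt.Printf("node %s reports pressure condition %s\n", n.Name, c.Type)
			}
		}
	}
}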
	I0920 18:13:55.802639  749135 start.go:241] waiting for startup goroutines ...
	I0920 18:13:55.807268  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:55.834016  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:55.834628  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:56.049150  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:56.305873  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:56.331424  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:56.331798  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:56.550328  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:56.806065  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:56.829659  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:56.830161  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:57.049081  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:57.306075  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:57.329355  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:57.330540  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:57.549591  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:57.805900  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:57.830374  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:57.832330  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:58.049092  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:58.306271  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:58.329770  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:58.331160  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:58.922331  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:58.923063  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:58.923163  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:58.924173  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:59.050995  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:59.306609  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:59.410277  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:59.410618  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:59.549349  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:59.806119  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:59.829906  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:59.830124  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:00.049161  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:00.306487  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:00.330117  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:00.331103  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:00.549561  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:00.806760  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:00.831148  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:00.831297  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:01.050001  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:01.306298  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:01.407860  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:01.408083  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:01.548728  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:01.806320  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:01.830021  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:01.830689  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:02.048991  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:02.305521  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:02.330400  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:02.331175  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:02.549048  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:02.805598  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:02.830127  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:02.830327  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:03.049629  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:03.305858  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:03.331322  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:03.331679  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:03.548558  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:03.820166  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:03.830589  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:03.832021  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:04.465452  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:04.465905  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:04.465965  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:04.466066  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:04.565162  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:04.805221  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:04.830427  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:04.830573  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:05.050021  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:05.305449  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:05.330307  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:05.331288  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:05.549216  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:05.805952  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:05.830822  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:05.830882  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:06.048888  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:06.305947  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:06.330556  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:06.330915  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:06.549018  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:06.806964  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:06.841818  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:06.843261  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:07.048576  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:07.305982  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:07.330357  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:07.330437  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:07.549676  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:07.813909  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:07.830340  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:07.830795  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:08.050020  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:08.306364  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:08.330678  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:08.332935  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:08.548619  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:08.805004  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:08.830441  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:08.831560  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:09.332291  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:09.333139  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:09.333782  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:09.335034  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:09.549087  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:09.805906  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:09.829949  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:09.830348  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:10.049303  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:10.306098  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:10.329817  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:10.330883  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:10.549227  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:10.951479  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:10.951670  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:10.951904  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:11.048505  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:11.306899  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:11.330827  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:11.331176  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:11.549848  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:11.805719  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:11.830262  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:11.830606  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:12.059649  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:12.305971  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:12.329961  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:12.330563  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:12.549966  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:12.804939  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:12.829214  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:12.830837  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:13.048395  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:13.305641  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:13.331438  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:13.331605  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:13.549421  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:13.805919  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:13.831661  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:13.831730  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:14.049399  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:14.306300  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:14.329818  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:14.330774  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:14.552222  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:14.806365  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:14.829698  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:14.831887  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:15.048953  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:15.305618  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:15.330650  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:15.330943  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:15.548777  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:15.806132  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:15.830944  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:15.831352  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:16.052172  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:16.306342  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:16.329653  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:16.330883  749135 kapi.go:107] duration metric: took 37.504199599s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 18:14:16.548598  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:16.805754  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:16.830184  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:17.049843  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:17.383048  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:17.383735  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:17.550278  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:17.806058  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:17.829341  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:18.051596  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:18.306388  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:18.334664  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:18.552534  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:18.806897  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:18.830308  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:19.050045  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:19.306131  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:19.329862  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:19.550696  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:19.807045  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:19.829977  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:20.048666  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:20.306256  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:20.329911  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:20.550226  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:20.806144  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:20.830855  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:21.049583  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:21.310640  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:21.412808  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:21.549653  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:21.805953  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:21.829404  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:22.049850  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:22.315829  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:22.331862  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:22.549120  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:22.806085  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:22.829986  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:23.049654  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:23.306266  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:23.330058  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:23.560251  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:23.807013  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:23.830715  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:24.049404  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:24.306201  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:24.330512  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:24.595031  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:24.806293  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:24.907159  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:25.048965  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:25.305513  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:25.331059  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:25.549920  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:25.805287  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:25.830246  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:26.048992  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:26.306656  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:26.329987  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:26.549698  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:26.808992  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:26.829741  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:27.052649  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:27.312773  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:27.331951  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:27.562526  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:27.805604  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:27.830050  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:28.067172  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:28.306333  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:28.330924  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:28.550567  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:28.807713  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:28.836265  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:29.049440  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:29.305994  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:29.329628  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:29.551265  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:29.807081  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:29.829169  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:30.051607  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:30.308200  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:30.331298  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:30.553108  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:30.822844  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:30.831353  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:31.049853  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:31.305139  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:31.329419  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:31.549350  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:31.806142  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:31.829483  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:32.053013  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:32.306129  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:32.330537  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:32.771680  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:32.806908  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:32.831303  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:33.050163  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:33.305068  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:33.330437  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:33.548440  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:33.806177  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:33.830995  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:34.049496  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:34.310365  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:34.329994  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:34.548907  749135 kapi.go:107] duration metric: took 53.50460724s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 18:14:34.805871  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:34.830222  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:35.306762  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:35.330726  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:35.806453  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:35.830187  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:36.305548  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:36.330510  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:36.806443  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:36.829844  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:37.306287  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:37.330018  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:37.806187  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:37.829944  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:38.306428  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:38.330700  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:38.806275  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:38.830764  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:39.305577  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:39.330471  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:39.806014  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:39.829683  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:40.306572  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:40.329962  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:40.806663  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:40.830402  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:41.305985  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:41.329856  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:41.807066  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:41.829842  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:42.305779  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:42.330575  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:42.805256  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:42.829665  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:43.305345  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:43.329924  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:43.805970  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:43.829619  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:44.305067  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:44.330110  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:44.807165  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:44.832428  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:45.307073  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:45.329430  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:45.807239  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:45.829759  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:46.305795  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:46.330660  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:46.807307  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:46.829950  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:47.306710  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:47.330054  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:47.806495  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:47.830576  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:48.305615  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:48.330601  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:48.805326  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:48.829994  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:49.306221  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:49.330067  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:49.807517  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:49.831847  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:50.312486  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:50.412022  749135 kapi.go:107] duration metric: took 1m11.586419635s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 18:14:50.805525  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:51.306784  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:51.919819  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:52.306451  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:52.809242  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:53.318752  749135 kapi.go:107] duration metric: took 1m10.516788064s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 18:14:53.320395  749135 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-446299 cluster.
	I0920 18:14:53.321854  749135 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 18:14:53.323252  749135 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 18:14:53.324985  749135 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, storage-provisioner, default-storageclass, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0920 18:14:53.326283  749135 addons.go:510] duration metric: took 1m23.208765269s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner storage-provisioner default-storageclass metrics-server inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0920 18:14:53.326342  749135 start.go:246] waiting for cluster config update ...
	I0920 18:14:53.326365  749135 start.go:255] writing updated cluster config ...
	I0920 18:14:53.326710  749135 ssh_runner.go:195] Run: rm -f paused
	I0920 18:14:53.387365  749135 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:14:53.389186  749135 out.go:177] * Done! kubectl is now configured to use "addons-446299" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.161037157Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856652161011535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c366805b-8fc5-449b-8a34-0eb221c7c5a9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.161492548Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aec5cff0-8ac5-4c3a-8da4-904baed10a0c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.161789555Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aec5cff0-8ac5-4c3a-8da4-904baed10a0c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.162350790Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c4b9c3a7c53984fdcd53d01df116f55695ae712f2f303bd6c13b7f7ae352228,PodSandboxId:efe0ec0dcbcc2ed97a1516bf84bf6944f46cc3c709619429a3f8a6ed7ec20db4,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726856092713670363,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9scf7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e1fe9053-9c74-44c1-b9eb-33e656a4810b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba7dc5faa58b70f8ae294e26f758d07d8a41941a4b50201e68cc018c51a0c741,PodSandboxId:75840320e52800f1f44b2e6c517cc9307855642595e4a7055201d0ba2d030659,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726856089744039479,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-8kt58,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 91004bb0-5831-431e-8777-5e
8e4b5296bc,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b094e7c30c796bf0bee43b60b80d46621df4bbd767dc91c732eb3b7bfa0bb00c,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726856074238826249,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed98529d363a04b2955c02104f56e8a3cd80d69b45b2e1944ff3b0b7c189288,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726856072837441671,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69da68d150b2a5583b7305709c1c4bbf0f0a8590d238d599504b11d9ad7b529e,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726856070768208336,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd9ca7a3ca987a47ab5b416daf04522a3b27c6339db4003eb231d16ece603a60,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726856069831000814,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2b6759c0bf97ff3d4de314ce5ca4e5311a8546b342d1ec787ca3a1624f8908,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726856068009772282,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66723f0443fe259bbab9521031456f7833339138ca42ab655fadf6bafc2136c5,PodSandboxId:00b4d98c2977
96e0eb1b921793bddbf0c466ffdc076d60dd27517a349c2d3749,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726856066130067570,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684355d7-d68e-4357-8103-d8350a38ea37,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c917700eb77472b431699f7e3b8ffa5e99fb0c6e7b94da0e7dc3e5d789ff7866,Pod
SandboxId:3ffd6a03ee49011ec8d222722b52204537020ec67831669422b18f2722d276e2,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726856064693574171,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131974d-0f4b-4bc6-bec3-d4c797279aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:509b6bbf231a9f6acf9ed9b5a160d57af8fe6ce822
d14a360f1c69aead3f9d36,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726856062559192499,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e86a2c89e146b1f6fc31a26a2e49b335f8ae30c35e76d7136b68425260628fef,PodSandboxId:a24f9a7c284879488d62c5c3a7402fbdc7b2ff55b494a70888c8b4b46593c754,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061202431069,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2mwr8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: afcf3275-77b0-49cd-b425-e1c3fe89fe90,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:bf44e059a196a437fcc79e35dc09edc08e7e7fa8799df9f5556af7ec52f8bbcc,PodSandboxId:1938162f1608400bc041a5b0473880759f6d77d6783afec076342b08458fb334,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061156977853,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sdwls,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8334b2c4-8b09-408c-8652-46103ce6f6c6,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f5bce9e468f1d83d07951514190608f5cb1a2826158632ec7e66e3d069b730,PodSandboxId:46ab05da30745fa494969aa465b9ae41146fb457dd17388f6f0fbfa7637de4b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059566643922,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-4qwlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4cd83fc-a074-4317-9b02-22010ae0ca66,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf93216045927e57562a5ef14225eebdfc0b71d50b89062312728787ee2e82f,PodSandboxId:f64e4538489ab0114de17e1f8f0c98d3d95618162fa5d2ed9b3853eb59a75d77,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059450265287,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8rk95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63d1f200-a587-488c-82d3-bf38586a6fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844,PodSandboxId:dd8942402304fc3849ddaac3cd53c37f8af44d3a68106d3633546f78cb29c992,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726856057582231326,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-dgfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84513540-b090-4d24-b6e0-9ed764434018,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bc74b520cd1d4dcf7bb82c116c356ff3d8c71b059d02bc9aa144a3677ff3de,PodSandboxId:34301f7252ea6eae961095d9413f9fdd3ef14ea8253d18e0da80e4ed2b715059,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:08dc5a48792f971b401d3758d4f37fd4af18aa2881668d65fa2c0b3bc61d7af4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38c5e506fa551ba5a1812dff63585e44b6c532dd4984b96f90944730f1c6e5c2,State:CONTAINER_EXITED,CreatedAt:1726856055896888672,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-bqdmf,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ab987d-a80f-412a-8a15-03a5898a2e9e,},Annotations:map[string]string{io.kubernetes.container.hash: c90bc829,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b425ff4f976afe3cb61d35934638e72a10e0094f7b61f40352a2fee42636302f,PodSandboxId:a0bef6fd3ee4b307210dd0ac0e2746329872520eb77ba21f03f92566351704f2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,
CreatedAt:1726856046927873598,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-tvbgx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b4d58283-346f-437d-adfb-34215341023e,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bc1f72e1ea240845c1b51e886bddd626c5c1de271a30103c731f8c4931a84d3,PodSandboxId:63f0d2722ba276dd3b36e061448a39004477c837ce53a11da2279149998eaf3a,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:5e8c7f954d64eb89a98a3f84b6dd1e1f4a9cf3d25e41575dd0a96d3e3363cba7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:75ef5b734af47dc41ff2fb442f287ee08c7da31dddb37596
16a8f693f0f346a0,State:CONTAINER_EXITED,CreatedAt:1726856041231263633,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-66c9cd494c-vxc6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b4cecb-c85b-45ef-8043-e88a81971d51,},Annotations:map[string]string{io.kubernetes.container.hash: 49fa49ac,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68195d8abd2e36c4e6f93a25bf60ca76fde83bf77a850a92b5213e7653c8414e,PodSandboxId:50aa8158427c9580c2a5ec7846daa046ebdb66adcc3769f3b811e9bfd73dee74,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726856026660615460,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 631849c1-f984-4e83-b07b-6b2ed4eb0697,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123e17c57dc2abd9c047233f8257257a3994d71637992344add53ad7199bd9f0,PodSandboxId:2de8a3616c78216796d1a30e49390fa1880efae5c01dc6d060c3a9fc52733244,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{
Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856016407131102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e9e378d-208e-46e0-a2be-70f96e59408a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52dc29cba22a178059e3f5273c57de1362df61bcd21abc9ad9c5058087ed31a,PodSandboxId:a7fdf4add17f82634ceda8e2a8ce96fc2312b21d1e4bcabce0730c45dba99a5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc4
8af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856014256879968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8b5fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 226fc466-f0b5-4501-8879-b8b9b8d758ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371fb9f89e965c1d1f23b67cb00baa69dc199d2d1a
7cb0255780a4516c7256a6,PodSandboxId:5aa37b64d2a9c61038f28fea479857487cf0c835df5704953ae6496a18553faf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856011173606981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pcgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934faade-c115-4ced-9bb6-c22a2fe014f2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e7734f588477ea0c8338b75bff4c99d2033144998f9977041fbf99b5880072,PodSandbox
Id:4306bc0f35baa7738aceb1c5a0dfcf9c43a7541ffb8e1e463f1d2bfb3b4ddf65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856000251287780,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f419eac436c5a6f133bb67c6a198274,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730952f4127d66b35d731eb28568293e71789263c71a1a0255283cb51922992c,PodSandboxId:403b403cdf2182
5fc57049326772376016cc8b60292a2666bdde28fa4d9d97d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856000260280505,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da0809c41e3f89be51ba1d85d92334c0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402ab000bdb9360b9d14054aa336dc4312504e85cd5336ba788bcc24a74fb551,PodSandboxId:17de22cbd91b4d025017f1149b32f21
68ea0cac728b75d80f78ab208ff3de7aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856000233156133,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86ddc6bc2cc035d3de8f8c47a04894ae,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8af18aadd9a198bf616d46a7b451c4aa04e96f96e40f4b3bfe6f0ed2db6278e,PodSandboxId:859cc747f1c82c2cfec8fa47af83f84bb172224df65a7adc26b7cd23a8e2bb3d,Metadata:&Con
tainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856000241829850,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c1dc236d6aa092754be85db9af15d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aec5cff0-8ac5-4c3a-8da4-904baed10a0c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.205837527Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=907a9f02-f0df-4dd8-ae4e-3239499b76bf name=/runtime.v1.RuntimeService/Version
	Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.205931540Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=907a9f02-f0df-4dd8-ae4e-3239499b76bf name=/runtime.v1.RuntimeService/Version
	Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.207904691Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ce22261b-3160-4de8-9c74-bfebf86d864d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.209248139Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856652209214846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce22261b-3160-4de8-9c74-bfebf86d864d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.209973165Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eb47e070-a12f-4c44-a30d-58c927d97261 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.210046076Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eb47e070-a12f-4c44-a30d-58c927d97261 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.210566655Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c4b9c3a7c53984fdcd53d01df116f55695ae712f2f303bd6c13b7f7ae352228,PodSandboxId:efe0ec0dcbcc2ed97a1516bf84bf6944f46cc3c709619429a3f8a6ed7ec20db4,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726856092713670363,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9scf7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e1fe9053-9c74-44c1-b9eb-33e656a4810b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba7dc5faa58b70f8ae294e26f758d07d8a41941a4b50201e68cc018c51a0c741,PodSandboxId:75840320e52800f1f44b2e6c517cc9307855642595e4a7055201d0ba2d030659,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726856089744039479,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-8kt58,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 91004bb0-5831-431e-8777-5e
8e4b5296bc,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b094e7c30c796bf0bee43b60b80d46621df4bbd767dc91c732eb3b7bfa0bb00c,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726856074238826249,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed98529d363a04b2955c02104f56e8a3cd80d69b45b2e1944ff3b0b7c189288,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726856072837441671,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69da68d150b2a5583b7305709c1c4bbf0f0a8590d238d599504b11d9ad7b529e,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726856070768208336,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd9ca7a3ca987a47ab5b416daf04522a3b27c6339db4003eb231d16ece603a60,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726856069831000814,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2b6759c0bf97ff3d4de314ce5ca4e5311a8546b342d1ec787ca3a1624f8908,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726856068009772282,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66723f0443fe259bbab9521031456f7833339138ca42ab655fadf6bafc2136c5,PodSandboxId:00b4d98c2977
96e0eb1b921793bddbf0c466ffdc076d60dd27517a349c2d3749,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726856066130067570,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684355d7-d68e-4357-8103-d8350a38ea37,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c917700eb77472b431699f7e3b8ffa5e99fb0c6e7b94da0e7dc3e5d789ff7866,Pod
SandboxId:3ffd6a03ee49011ec8d222722b52204537020ec67831669422b18f2722d276e2,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726856064693574171,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131974d-0f4b-4bc6-bec3-d4c797279aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:509b6bbf231a9f6acf9ed9b5a160d57af8fe6ce822
d14a360f1c69aead3f9d36,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726856062559192499,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e86a2c89e146b1f6fc31a26a2e49b335f8ae30c35e76d7136b68425260628fef,PodSandboxId:a24f9a7c284879488d62c5c3a7402fbdc7b2ff55b494a70888c8b4b46593c754,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061202431069,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2mwr8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: afcf3275-77b0-49cd-b425-e1c3fe89fe90,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:bf44e059a196a437fcc79e35dc09edc08e7e7fa8799df9f5556af7ec52f8bbcc,PodSandboxId:1938162f1608400bc041a5b0473880759f6d77d6783afec076342b08458fb334,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061156977853,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sdwls,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8334b2c4-8b09-408c-8652-46103ce6f6c6,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f5bce9e468f1d83d07951514190608f5cb1a2826158632ec7e66e3d069b730,PodSandboxId:46ab05da30745fa494969aa465b9ae41146fb457dd17388f6f0fbfa7637de4b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059566643922,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-4qwlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4cd83fc-a074-4317-9b02-22010ae0ca66,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf93216045927e57562a5ef14225eebdfc0b71d50b89062312728787ee2e82f,PodSandboxId:f64e4538489ab0114de17e1f8f0c98d3d95618162fa5d2ed9b3853eb59a75d77,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059450265287,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8rk95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63d1f200-a587-488c-82d3-bf38586a6fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844,PodSandboxId:dd8942402304fc3849ddaac3cd53c37f8af44d3a68106d3633546f78cb29c992,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726856057582231326,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-dgfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84513540-b090-4d24-b6e0-9ed764434018,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bc74b520cd1d4dcf7bb82c116c356ff3d8c71b059d02bc9aa144a3677ff3de,PodSandboxId:34301f7252ea6eae961095d9413f9fdd3ef14ea8253d18e0da80e4ed2b715059,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:08dc5a48792f971b401d3758d4f37fd4af18aa2881668d65fa2c0b3bc61d7af4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38c5e506fa551ba5a1812dff63585e44b6c532dd4984b96f90944730f1c6e5c2,State:CONTAINER_EXITED,CreatedAt:1726856055896888672,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-bqdmf,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ab987d-a80f-412a-8a15-03a5898a2e9e,},Annotations:map[string]string{io.kubernetes.container.hash: c90bc829,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b425ff4f976afe3cb61d35934638e72a10e0094f7b61f40352a2fee42636302f,PodSandboxId:a0bef6fd3ee4b307210dd0ac0e2746329872520eb77ba21f03f92566351704f2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,
CreatedAt:1726856046927873598,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-tvbgx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b4d58283-346f-437d-adfb-34215341023e,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bc1f72e1ea240845c1b51e886bddd626c5c1de271a30103c731f8c4931a84d3,PodSandboxId:63f0d2722ba276dd3b36e061448a39004477c837ce53a11da2279149998eaf3a,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:5e8c7f954d64eb89a98a3f84b6dd1e1f4a9cf3d25e41575dd0a96d3e3363cba7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:75ef5b734af47dc41ff2fb442f287ee08c7da31dddb37596
16a8f693f0f346a0,State:CONTAINER_EXITED,CreatedAt:1726856041231263633,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-66c9cd494c-vxc6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b4cecb-c85b-45ef-8043-e88a81971d51,},Annotations:map[string]string{io.kubernetes.container.hash: 49fa49ac,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68195d8abd2e36c4e6f93a25bf60ca76fde83bf77a850a92b5213e7653c8414e,PodSandboxId:50aa8158427c9580c2a5ec7846daa046ebdb66adcc3769f3b811e9bfd73dee74,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726856026660615460,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 631849c1-f984-4e83-b07b-6b2ed4eb0697,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123e17c57dc2abd9c047233f8257257a3994d71637992344add53ad7199bd9f0,PodSandboxId:2de8a3616c78216796d1a30e49390fa1880efae5c01dc6d060c3a9fc52733244,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{
Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856016407131102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e9e378d-208e-46e0-a2be-70f96e59408a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52dc29cba22a178059e3f5273c57de1362df61bcd21abc9ad9c5058087ed31a,PodSandboxId:a7fdf4add17f82634ceda8e2a8ce96fc2312b21d1e4bcabce0730c45dba99a5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc4
8af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856014256879968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8b5fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 226fc466-f0b5-4501-8879-b8b9b8d758ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371fb9f89e965c1d1f23b67cb00baa69dc199d2d1a
7cb0255780a4516c7256a6,PodSandboxId:5aa37b64d2a9c61038f28fea479857487cf0c835df5704953ae6496a18553faf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856011173606981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pcgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934faade-c115-4ced-9bb6-c22a2fe014f2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e7734f588477ea0c8338b75bff4c99d2033144998f9977041fbf99b5880072,PodSandbox
Id:4306bc0f35baa7738aceb1c5a0dfcf9c43a7541ffb8e1e463f1d2bfb3b4ddf65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856000251287780,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f419eac436c5a6f133bb67c6a198274,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730952f4127d66b35d731eb28568293e71789263c71a1a0255283cb51922992c,PodSandboxId:403b403cdf2182
5fc57049326772376016cc8b60292a2666bdde28fa4d9d97d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856000260280505,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da0809c41e3f89be51ba1d85d92334c0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402ab000bdb9360b9d14054aa336dc4312504e85cd5336ba788bcc24a74fb551,PodSandboxId:17de22cbd91b4d025017f1149b32f21
68ea0cac728b75d80f78ab208ff3de7aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856000233156133,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86ddc6bc2cc035d3de8f8c47a04894ae,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8af18aadd9a198bf616d46a7b451c4aa04e96f96e40f4b3bfe6f0ed2db6278e,PodSandboxId:859cc747f1c82c2cfec8fa47af83f84bb172224df65a7adc26b7cd23a8e2bb3d,Metadata:&Con
tainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856000241829850,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c1dc236d6aa092754be85db9af15d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eb47e070-a12f-4c44-a30d-58c927d97261 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.251300631Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bb1108cc-2fe6-4e14-933c-f89daa517176 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.251393675Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bb1108cc-2fe6-4e14-933c-f89daa517176 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.253095856Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8001f8ff-df77-48fc-9a11-02a6c84ace44 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.254136859Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856652254108980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8001f8ff-df77-48fc-9a11-02a6c84ace44 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.254673857Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6faea495-80e8-41f4-89e2-b8c4aa56f8de name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.254791771Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6faea495-80e8-41f4-89e2-b8c4aa56f8de name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.255291250Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c4b9c3a7c53984fdcd53d01df116f55695ae712f2f303bd6c13b7f7ae352228,PodSandboxId:efe0ec0dcbcc2ed97a1516bf84bf6944f46cc3c709619429a3f8a6ed7ec20db4,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726856092713670363,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9scf7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e1fe9053-9c74-44c1-b9eb-33e656a4810b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba7dc5faa58b70f8ae294e26f758d07d8a41941a4b50201e68cc018c51a0c741,PodSandboxId:75840320e52800f1f44b2e6c517cc9307855642595e4a7055201d0ba2d030659,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726856089744039479,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-8kt58,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 91004bb0-5831-431e-8777-5e
8e4b5296bc,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b094e7c30c796bf0bee43b60b80d46621df4bbd767dc91c732eb3b7bfa0bb00c,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726856074238826249,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed98529d363a04b2955c02104f56e8a3cd80d69b45b2e1944ff3b0b7c189288,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726856072837441671,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69da68d150b2a5583b7305709c1c4bbf0f0a8590d238d599504b11d9ad7b529e,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726856070768208336,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd9ca7a3ca987a47ab5b416daf04522a3b27c6339db4003eb231d16ece603a60,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726856069831000814,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2b6759c0bf97ff3d4de314ce5ca4e5311a8546b342d1ec787ca3a1624f8908,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726856068009772282,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66723f0443fe259bbab9521031456f7833339138ca42ab655fadf6bafc2136c5,PodSandboxId:00b4d98c2977
96e0eb1b921793bddbf0c466ffdc076d60dd27517a349c2d3749,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726856066130067570,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684355d7-d68e-4357-8103-d8350a38ea37,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c917700eb77472b431699f7e3b8ffa5e99fb0c6e7b94da0e7dc3e5d789ff7866,Pod
SandboxId:3ffd6a03ee49011ec8d222722b52204537020ec67831669422b18f2722d276e2,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726856064693574171,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131974d-0f4b-4bc6-bec3-d4c797279aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:509b6bbf231a9f6acf9ed9b5a160d57af8fe6ce822
d14a360f1c69aead3f9d36,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726856062559192499,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e86a2c89e146b1f6fc31a26a2e49b335f8ae30c35e76d7136b68425260628fef,PodSandboxId:a24f9a7c284879488d62c5c3a7402fbdc7b2ff55b494a70888c8b4b46593c754,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061202431069,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2mwr8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: afcf3275-77b0-49cd-b425-e1c3fe89fe90,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:bf44e059a196a437fcc79e35dc09edc08e7e7fa8799df9f5556af7ec52f8bbcc,PodSandboxId:1938162f1608400bc041a5b0473880759f6d77d6783afec076342b08458fb334,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061156977853,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sdwls,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8334b2c4-8b09-408c-8652-46103ce6f6c6,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f5bce9e468f1d83d07951514190608f5cb1a2826158632ec7e66e3d069b730,PodSandboxId:46ab05da30745fa494969aa465b9ae41146fb457dd17388f6f0fbfa7637de4b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059566643922,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-4qwlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4cd83fc-a074-4317-9b02-22010ae0ca66,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf93216045927e57562a5ef14225eebdfc0b71d50b89062312728787ee2e82f,PodSandboxId:f64e4538489ab0114de17e1f8f0c98d3d95618162fa5d2ed9b3853eb59a75d77,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059450265287,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8rk95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63d1f200-a587-488c-82d3-bf38586a6fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844,PodSandboxId:dd8942402304fc3849ddaac3cd53c37f8af44d3a68106d3633546f78cb29c992,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726856057582231326,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-dgfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84513540-b090-4d24-b6e0-9ed764434018,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bc74b520cd1d4dcf7bb82c116c356ff3d8c71b059d02bc9aa144a3677ff3de,PodSandboxId:34301f7252ea6eae961095d9413f9fdd3ef14ea8253d18e0da80e4ed2b715059,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:08dc5a48792f971b401d3758d4f37fd4af18aa2881668d65fa2c0b3bc61d7af4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38c5e506fa551ba5a1812dff63585e44b6c532dd4984b96f90944730f1c6e5c2,State:CONTAINER_EXITED,CreatedAt:1726856055896888672,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-bqdmf,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ab987d-a80f-412a-8a15-03a5898a2e9e,},Annotations:map[string]string{io.kubernetes.container.hash: c90bc829,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b425ff4f976afe3cb61d35934638e72a10e0094f7b61f40352a2fee42636302f,PodSandboxId:a0bef6fd3ee4b307210dd0ac0e2746329872520eb77ba21f03f92566351704f2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,
CreatedAt:1726856046927873598,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-tvbgx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b4d58283-346f-437d-adfb-34215341023e,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bc1f72e1ea240845c1b51e886bddd626c5c1de271a30103c731f8c4931a84d3,PodSandboxId:63f0d2722ba276dd3b36e061448a39004477c837ce53a11da2279149998eaf3a,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:5e8c7f954d64eb89a98a3f84b6dd1e1f4a9cf3d25e41575dd0a96d3e3363cba7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:75ef5b734af47dc41ff2fb442f287ee08c7da31dddb37596
16a8f693f0f346a0,State:CONTAINER_EXITED,CreatedAt:1726856041231263633,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-66c9cd494c-vxc6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b4cecb-c85b-45ef-8043-e88a81971d51,},Annotations:map[string]string{io.kubernetes.container.hash: 49fa49ac,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68195d8abd2e36c4e6f93a25bf60ca76fde83bf77a850a92b5213e7653c8414e,PodSandboxId:50aa8158427c9580c2a5ec7846daa046ebdb66adcc3769f3b811e9bfd73dee74,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726856026660615460,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 631849c1-f984-4e83-b07b-6b2ed4eb0697,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123e17c57dc2abd9c047233f8257257a3994d71637992344add53ad7199bd9f0,PodSandboxId:2de8a3616c78216796d1a30e49390fa1880efae5c01dc6d060c3a9fc52733244,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{
Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856016407131102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e9e378d-208e-46e0-a2be-70f96e59408a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52dc29cba22a178059e3f5273c57de1362df61bcd21abc9ad9c5058087ed31a,PodSandboxId:a7fdf4add17f82634ceda8e2a8ce96fc2312b21d1e4bcabce0730c45dba99a5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc4
8af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856014256879968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8b5fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 226fc466-f0b5-4501-8879-b8b9b8d758ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371fb9f89e965c1d1f23b67cb00baa69dc199d2d1a
7cb0255780a4516c7256a6,PodSandboxId:5aa37b64d2a9c61038f28fea479857487cf0c835df5704953ae6496a18553faf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856011173606981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pcgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934faade-c115-4ced-9bb6-c22a2fe014f2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e7734f588477ea0c8338b75bff4c99d2033144998f9977041fbf99b5880072,PodSandbox
Id:4306bc0f35baa7738aceb1c5a0dfcf9c43a7541ffb8e1e463f1d2bfb3b4ddf65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856000251287780,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f419eac436c5a6f133bb67c6a198274,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730952f4127d66b35d731eb28568293e71789263c71a1a0255283cb51922992c,PodSandboxId:403b403cdf2182
5fc57049326772376016cc8b60292a2666bdde28fa4d9d97d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856000260280505,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da0809c41e3f89be51ba1d85d92334c0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402ab000bdb9360b9d14054aa336dc4312504e85cd5336ba788bcc24a74fb551,PodSandboxId:17de22cbd91b4d025017f1149b32f21
68ea0cac728b75d80f78ab208ff3de7aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856000233156133,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86ddc6bc2cc035d3de8f8c47a04894ae,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8af18aadd9a198bf616d46a7b451c4aa04e96f96e40f4b3bfe6f0ed2db6278e,PodSandboxId:859cc747f1c82c2cfec8fa47af83f84bb172224df65a7adc26b7cd23a8e2bb3d,Metadata:&Con
tainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856000241829850,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c1dc236d6aa092754be85db9af15d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6faea495-80e8-41f4-89e2-b8c4aa56f8de name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.294206005Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=20f698b6-288e-4e17-b668-2eeceff0183f name=/runtime.v1.RuntimeService/Version
	Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.294282207Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=20f698b6-288e-4e17-b668-2eeceff0183f name=/runtime.v1.RuntimeService/Version
	Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.295357162Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dc09133c-7298-4721-be20-21ba4944e6c9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.297137865Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856652297112557,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc09133c-7298-4721-be20-21ba4944e6c9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.297977866Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16eee780-ea42-4060-88b6-35af41558f76 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.298039927Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16eee780-ea42-4060-88b6-35af41558f76 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.299047350Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c4b9c3a7c53984fdcd53d01df116f55695ae712f2f303bd6c13b7f7ae352228,PodSandboxId:efe0ec0dcbcc2ed97a1516bf84bf6944f46cc3c709619429a3f8a6ed7ec20db4,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726856092713670363,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9scf7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e1fe9053-9c74-44c1-b9eb-33e656a4810b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba7dc5faa58b70f8ae294e26f758d07d8a41941a4b50201e68cc018c51a0c741,PodSandboxId:75840320e52800f1f44b2e6c517cc9307855642595e4a7055201d0ba2d030659,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726856089744039479,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-8kt58,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 91004bb0-5831-431e-8777-5e
8e4b5296bc,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b094e7c30c796bf0bee43b60b80d46621df4bbd767dc91c732eb3b7bfa0bb00c,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726856074238826249,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed98529d363a04b2955c02104f56e8a3cd80d69b45b2e1944ff3b0b7c189288,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726856072837441671,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69da68d150b2a5583b7305709c1c4bbf0f0a8590d238d599504b11d9ad7b529e,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726856070768208336,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd9ca7a3ca987a47ab5b416daf04522a3b27c6339db4003eb231d16ece603a60,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726856069831000814,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2b6759c0bf97ff3d4de314ce5ca4e5311a8546b342d1ec787ca3a1624f8908,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726856068009772282,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66723f0443fe259bbab9521031456f7833339138ca42ab655fadf6bafc2136c5,PodSandboxId:00b4d98c2977
96e0eb1b921793bddbf0c466ffdc076d60dd27517a349c2d3749,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726856066130067570,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684355d7-d68e-4357-8103-d8350a38ea37,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c917700eb77472b431699f7e3b8ffa5e99fb0c6e7b94da0e7dc3e5d789ff7866,Pod
SandboxId:3ffd6a03ee49011ec8d222722b52204537020ec67831669422b18f2722d276e2,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726856064693574171,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131974d-0f4b-4bc6-bec3-d4c797279aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:509b6bbf231a9f6acf9ed9b5a160d57af8fe6ce822
d14a360f1c69aead3f9d36,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726856062559192499,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e86a2c89e146b1f6fc31a26a2e49b335f8ae30c35e76d7136b68425260628fef,PodSandboxId:a24f9a7c284879488d62c5c3a7402fbdc7b2ff55b494a70888c8b4b46593c754,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061202431069,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2mwr8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: afcf3275-77b0-49cd-b425-e1c3fe89fe90,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:bf44e059a196a437fcc79e35dc09edc08e7e7fa8799df9f5556af7ec52f8bbcc,PodSandboxId:1938162f1608400bc041a5b0473880759f6d77d6783afec076342b08458fb334,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061156977853,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sdwls,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8334b2c4-8b09-408c-8652-46103ce6f6c6,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f5bce9e468f1d83d07951514190608f5cb1a2826158632ec7e66e3d069b730,PodSandboxId:46ab05da30745fa494969aa465b9ae41146fb457dd17388f6f0fbfa7637de4b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059566643922,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-4qwlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4cd83fc-a074-4317-9b02-22010ae0ca66,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf93216045927e57562a5ef14225eebdfc0b71d50b89062312728787ee2e82f,PodSandboxId:f64e4538489ab0114de17e1f8f0c98d3d95618162fa5d2ed9b3853eb59a75d77,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059450265287,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8rk95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63d1f200-a587-488c-82d3-bf38586a6fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844,PodSandboxId:dd8942402304fc3849ddaac3cd53c37f8af44d3a68106d3633546f78cb29c992,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726856057582231326,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-dgfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84513540-b090-4d24-b6e0-9ed764434018,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bc74b520cd1d4dcf7bb82c116c356ff3d8c71b059d02bc9aa144a3677ff3de,PodSandboxId:34301f7252ea6eae961095d9413f9fdd3ef14ea8253d18e0da80e4ed2b715059,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:08dc5a48792f971b401d3758d4f37fd4af18aa2881668d65fa2c0b3bc61d7af4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38c5e506fa551ba5a1812dff63585e44b6c532dd4984b96f90944730f1c6e5c2,State:CONTAINER_EXITED,CreatedAt:1726856055896888672,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-bqdmf,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ab987d-a80f-412a-8a15-03a5898a2e9e,},Annotations:map[string]string{io.kubernetes.container.hash: c90bc829,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b425ff4f976afe3cb61d35934638e72a10e0094f7b61f40352a2fee42636302f,PodSandboxId:a0bef6fd3ee4b307210dd0ac0e2746329872520eb77ba21f03f92566351704f2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,
CreatedAt:1726856046927873598,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-tvbgx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b4d58283-346f-437d-adfb-34215341023e,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bc1f72e1ea240845c1b51e886bddd626c5c1de271a30103c731f8c4931a84d3,PodSandboxId:63f0d2722ba276dd3b36e061448a39004477c837ce53a11da2279149998eaf3a,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:5e8c7f954d64eb89a98a3f84b6dd1e1f4a9cf3d25e41575dd0a96d3e3363cba7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:75ef5b734af47dc41ff2fb442f287ee08c7da31dddb37596
16a8f693f0f346a0,State:CONTAINER_EXITED,CreatedAt:1726856041231263633,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-66c9cd494c-vxc6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b4cecb-c85b-45ef-8043-e88a81971d51,},Annotations:map[string]string{io.kubernetes.container.hash: 49fa49ac,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68195d8abd2e36c4e6f93a25bf60ca76fde83bf77a850a92b5213e7653c8414e,PodSandboxId:50aa8158427c9580c2a5ec7846daa046ebdb66adcc3769f3b811e9bfd73dee74,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726856026660615460,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 631849c1-f984-4e83-b07b-6b2ed4eb0697,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123e17c57dc2abd9c047233f8257257a3994d71637992344add53ad7199bd9f0,PodSandboxId:2de8a3616c78216796d1a30e49390fa1880efae5c01dc6d060c3a9fc52733244,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{
Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856016407131102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e9e378d-208e-46e0-a2be-70f96e59408a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52dc29cba22a178059e3f5273c57de1362df61bcd21abc9ad9c5058087ed31a,PodSandboxId:a7fdf4add17f82634ceda8e2a8ce96fc2312b21d1e4bcabce0730c45dba99a5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc4
8af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856014256879968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8b5fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 226fc466-f0b5-4501-8879-b8b9b8d758ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371fb9f89e965c1d1f23b67cb00baa69dc199d2d1a
7cb0255780a4516c7256a6,PodSandboxId:5aa37b64d2a9c61038f28fea479857487cf0c835df5704953ae6496a18553faf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856011173606981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pcgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934faade-c115-4ced-9bb6-c22a2fe014f2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e7734f588477ea0c8338b75bff4c99d2033144998f9977041fbf99b5880072,PodSandbox
Id:4306bc0f35baa7738aceb1c5a0dfcf9c43a7541ffb8e1e463f1d2bfb3b4ddf65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856000251287780,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f419eac436c5a6f133bb67c6a198274,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730952f4127d66b35d731eb28568293e71789263c71a1a0255283cb51922992c,PodSandboxId:403b403cdf2182
5fc57049326772376016cc8b60292a2666bdde28fa4d9d97d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856000260280505,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da0809c41e3f89be51ba1d85d92334c0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402ab000bdb9360b9d14054aa336dc4312504e85cd5336ba788bcc24a74fb551,PodSandboxId:17de22cbd91b4d025017f1149b32f21
68ea0cac728b75d80f78ab208ff3de7aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856000233156133,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86ddc6bc2cc035d3de8f8c47a04894ae,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8af18aadd9a198bf616d46a7b451c4aa04e96f96e40f4b3bfe6f0ed2db6278e,PodSandboxId:859cc747f1c82c2cfec8fa47af83f84bb172224df65a7adc26b7cd23a8e2bb3d,Metadata:&Con
tainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856000241829850,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c1dc236d6aa092754be85db9af15d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=16eee780-ea42-4060-88b6-35af41558f76 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	7c4b9c3a7c539       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 9 minutes ago       Running             gcp-auth                                 0                   efe0ec0dcbcc2       gcp-auth-89d5ffd79-9scf7
	ba7dc5faa58b7       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6                             9 minutes ago       Running             controller                               0                   75840320e5280       ingress-nginx-controller-bc57996ff-8kt58
	b094e7c30c796       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          9 minutes ago       Running             csi-snapshotter                          0                   eccc7c4b1b4ce       csi-hostpathplugin-fcmx5
	bed98529d363a       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          9 minutes ago       Running             csi-provisioner                          0                   eccc7c4b1b4ce       csi-hostpathplugin-fcmx5
	69da68d150b2a       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            9 minutes ago       Running             liveness-probe                           0                   eccc7c4b1b4ce       csi-hostpathplugin-fcmx5
	fd9ca7a3ca987       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           9 minutes ago       Running             hostpath                                 0                   eccc7c4b1b4ce       csi-hostpathplugin-fcmx5
	5a2b6759c0bf9       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                9 minutes ago       Running             node-driver-registrar                    0                   eccc7c4b1b4ce       csi-hostpathplugin-fcmx5
	66723f0443fe2       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              9 minutes ago       Running             csi-resizer                              0                   00b4d98c29779       csi-hostpath-resizer-0
	c917700eb7747       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             9 minutes ago       Running             csi-attacher                             0                   3ffd6a03ee490       csi-hostpath-attacher-0
	509b6bbf231a9       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   9 minutes ago       Running             csi-external-health-monitor-controller   0                   eccc7c4b1b4ce       csi-hostpathplugin-fcmx5
	e86a2c89e146b       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                                             9 minutes ago       Exited              patch                                    1                   a24f9a7c28487       ingress-nginx-admission-patch-2mwr8
	bf44e059a196a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   9 minutes ago       Exited              create                                   0                   1938162f16084       ingress-nginx-admission-create-sdwls
	33f5bce9e468f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      9 minutes ago       Running             volume-snapshot-controller               0                   46ab05da30745       snapshot-controller-56fcc65765-4qwlb
	cbf9321604592       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      9 minutes ago       Running             volume-snapshot-controller               0                   f64e4538489ab       snapshot-controller-56fcc65765-8rk95
	3c3b736165a00       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a                        9 minutes ago       Running             metrics-server                           0                   dd8942402304f       metrics-server-84c5f94fbc-dgfgh
	b425ff4f976af       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             10 minutes ago      Running             local-path-provisioner                   0                   a0bef6fd3ee4b       local-path-provisioner-86d989889c-tvbgx
	68195d8abd2e3       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             10 minutes ago      Running             minikube-ingress-dns                     0                   50aa8158427c9       kube-ingress-dns-minikube
	123e17c57dc2a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             10 minutes ago      Running             storage-provisioner                      0                   2de8a3616c782       storage-provisioner
	d52dc29cba22a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             10 minutes ago      Running             coredns                                  0                   a7fdf4add17f8       coredns-7c65d6cfc9-8b5fx
	371fb9f89e965       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                                             10 minutes ago      Running             kube-proxy                               0                   5aa37b64d2a9c       kube-proxy-9pcgb
	730952f4127d6       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                                             10 minutes ago      Running             kube-apiserver                           0                   403b403cdf218       kube-apiserver-addons-446299
	e9e7734f58847       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                                             10 minutes ago      Running             kube-scheduler                           0                   4306bc0f35baa       kube-scheduler-addons-446299
	a8af18aadd9a1       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                                             10 minutes ago      Running             kube-controller-manager                  0                   859cc747f1c82       kube-controller-manager-addons-446299
	402ab000bdb93       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             10 minutes ago      Running             etcd                                     0                   17de22cbd91b4       etcd-addons-446299
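
	The table above is the CRI container listing that the post-mortem log collection pulls from the node. If it needs to be reproduced by hand, a roughly equivalent listing can be taken from inside the VM with crictl pointed at the cri-o socket named in the node annotations; the profile name matches this run, everything else is illustrative:

	    # open a shell on the node for this profile
	    minikube -p addons-446299 ssh
	    # list all containers (running and exited) via the cri-o CRI socket
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	    # narrow to one container name, e.g. the ingress controller
	    sudo crictl ps -a --name controller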
	
	
	==> coredns [d52dc29cba22a178059e3f5273c57de1362df61bcd21abc9ad9c5058087ed31a] <==
	[INFO] 127.0.0.1:45092 - 31226 "HINFO IN 8537533385009167611.1098357581305743543. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017946303s
	[INFO] 10.244.0.7:50895 - 60070 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000864499s
	[INFO] 10.244.0.7:50895 - 30883 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.004754851s
	[INFO] 10.244.0.7:60479 - 45291 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000276551s
	[INFO] 10.244.0.7:60479 - 60648 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000259587s
	[INFO] 10.244.0.7:34337 - 50221 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000103649s
	[INFO] 10.244.0.7:34337 - 3119 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000190818s
	[INFO] 10.244.0.7:50579 - 48699 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000149541s
	[INFO] 10.244.0.7:50579 - 13882 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00029954s
	[INFO] 10.244.0.7:52674 - 19194 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000100903s
	[INFO] 10.244.0.7:52674 - 48616 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000131897s
	[INFO] 10.244.0.7:34842 - 24908 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000052174s
	[INFO] 10.244.0.7:34842 - 17742 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000131345s
	[INFO] 10.244.0.7:58542 - 36156 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000047177s
	[INFO] 10.244.0.7:58542 - 62014 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000148973s
	[INFO] 10.244.0.7:34082 - 14251 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000145316s
	[INFO] 10.244.0.7:34082 - 45485 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000238133s
	[INFO] 10.244.0.21:56997 - 31030 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000537673s
	[INFO] 10.244.0.21:35720 - 34441 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000147988s
	[INFO] 10.244.0.21:53795 - 23425 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001554s
	[INFO] 10.244.0.21:58869 - 385 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000122258s
	[INFO] 10.244.0.21:37326 - 35127 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00023415s
	[INFO] 10.244.0.21:35448 - 47752 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000126595s
	[INFO] 10.244.0.21:41454 - 25870 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003639103s
	[INFO] 10.244.0.21:51708 - 51164 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00402176s
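
	The NXDOMAIN entries above are ordinary search-path expansion (the resolver tries the name suffixed with kube-system.svc.cluster.local, svc.cluster.local and cluster.local before the fully-qualified lookup returns NOERROR), so in-cluster DNS for the registry Service looks healthy at this point. A quick way to re-check resolution independently of the failing test pod is a throwaway busybox pod; the pod name and image tag here are only illustrative:

	    kubectl --context addons-446299 run dns-check --rm -it --restart=Never \
	      --image=busybox:1.36 -- nslookup registry.kube-system.svc.cluster.local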
	
	
	==> describe nodes <==
	Name:               addons-446299
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-446299
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=addons-446299
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_13_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-446299
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-446299"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:13:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-446299
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:24:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:23:27 +0000   Fri, 20 Sep 2024 18:13:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:23:27 +0000   Fri, 20 Sep 2024 18:13:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:23:27 +0000   Fri, 20 Sep 2024 18:13:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:23:27 +0000   Fri, 20 Sep 2024 18:13:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.237
	  Hostname:    addons-446299
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 6b51819720d24a4988f4faf5cbed4e8f
	  System UUID:                6b518197-20d2-4a49-88f4-faf5cbed4e8f
	  Boot ID:                    431228fc-f5a8-4282-bf7e-10c36798659f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  default                     registry-test                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  default                     task-pv-pod-restore                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  gcp-auth                    gcp-auth-89d5ffd79-9scf7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-8kt58    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-8b5fx                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     10m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpathplugin-fcmx5                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-addons-446299                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-446299                250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-446299       200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-9pcgb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-446299                100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-84c5f94fbc-dgfgh             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         10m
	  kube-system                 snapshot-controller-56fcc65765-4qwlb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 snapshot-controller-56fcc65765-8rk95        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  local-path-storage          local-path-provisioner-86d989889c-tvbgx     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 10m   kube-proxy       
	  Normal  Starting                 10m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m   kubelet          Node addons-446299 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m   kubelet          Node addons-446299 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m   kubelet          Node addons-446299 status is now: NodeHasSufficientPID
	  Normal  NodeReady                10m   kubelet          Node addons-446299 status is now: NodeReady
	  Normal  RegisteredNode           10m   node-controller  Node addons-446299 event: Registered Node addons-446299 in Controller
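
	The node description above is the usual kubectl view of the control-plane node. To follow up on the same data later (for example the 950m / 47% CPU requests under "Allocated resources"), the commands below are a sketch; the jsonpath expression is just one convenient way to pull the allocatable figures:

	    kubectl --context addons-446299 describe node addons-446299
	    kubectl --context addons-446299 get node addons-446299 \
	      -o jsonpath='{.status.allocatable}'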
	
	
	==> dmesg <==
	[  +0.086501] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.305303] systemd-fstab-generator[1328]: Ignoring "noauto" option for root device
	[  +0.141616] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.046436] kauditd_printk_skb: 135 callbacks suppressed
	[  +5.120665] kauditd_printk_skb: 83 callbacks suppressed
	[  +5.997269] kauditd_printk_skb: 97 callbacks suppressed
	[  +8.458196] kauditd_printk_skb: 5 callbacks suppressed
	[Sep20 18:14] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.706525] kauditd_printk_skb: 34 callbacks suppressed
	[ +16.244583] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.135040] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.940354] kauditd_printk_skb: 30 callbacks suppressed
	[ +13.767745] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.007018] kauditd_printk_skb: 48 callbacks suppressed
	[Sep20 18:15] kauditd_printk_skb: 10 callbacks suppressed
	[Sep20 18:16] kauditd_printk_skb: 30 callbacks suppressed
	[Sep20 18:17] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 18:20] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 18:22] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 18:23] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.877503] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.382620] kauditd_printk_skb: 41 callbacks suppressed
	[  +8.681981] kauditd_printk_skb: 39 callbacks suppressed
	[ +13.570039] kauditd_printk_skb: 14 callbacks suppressed
	[Sep20 18:24] kauditd_printk_skb: 2 callbacks suppressed
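
	The kernel ring buffer excerpt above is mostly suppressed kauditd noise; nothing in it points at an OOM kill or a storage error. If the full buffer is wanted, it can be read from the node directly; the tail length is arbitrary:

	    minikube -p addons-446299 ssh -- sudo dmesg | tail -n 50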
	
	
	==> etcd [402ab000bdb9360b9d14054aa336dc4312504e85cd5336ba788bcc24a74fb551] <==
	{"level":"warn","ts":"2024-09-20T18:14:32.753190Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"350.800719ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:14:32.753227Z","caller":"traceutil/trace.go:171","msg":"trace[543841858] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1058; }","duration":"350.836502ms","start":"2024-09-20T18:14:32.402385Z","end":"2024-09-20T18:14:32.753221Z","steps":["trace[543841858] 'agreement among raft nodes before linearized reading'  (duration: 350.779906ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:14:32.753246Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T18:14:32.402356Z","time spent":"350.885838ms","remote":"127.0.0.1:36780","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-09-20T18:14:32.753338Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"340.730876ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:14:32.753372Z","caller":"traceutil/trace.go:171","msg":"trace[1542998802] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1058; }","duration":"340.769961ms","start":"2024-09-20T18:14:32.412597Z","end":"2024-09-20T18:14:32.753367Z","steps":["trace[1542998802] 'agreement among raft nodes before linearized reading'  (duration: 340.724283ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:14:32.753846Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"217.265355ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:14:32.753903Z","caller":"traceutil/trace.go:171","msg":"trace[581069886] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1058; }","duration":"217.327931ms","start":"2024-09-20T18:14:32.536567Z","end":"2024-09-20T18:14:32.753895Z","steps":["trace[581069886] 'agreement among raft nodes before linearized reading'  (duration: 217.246138ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:14:51.903628Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.538818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-20T18:14:51.904065Z","caller":"traceutil/trace.go:171","msg":"trace[2043860769] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:0; response_revision:1117; }","duration":"144.082045ms","start":"2024-09-20T18:14:51.759954Z","end":"2024-09-20T18:14:51.904036Z","steps":["trace[2043860769] 'count revisions from in-memory index tree'  (duration: 143.478073ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:14:51.904831Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.923374ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:14:51.904891Z","caller":"traceutil/trace.go:171","msg":"trace[386261722] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1117; }","duration":"111.005288ms","start":"2024-09-20T18:14:51.793876Z","end":"2024-09-20T18:14:51.904881Z","steps":["trace[386261722] 'range keys from in-memory index tree'  (duration: 110.882796ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:23:04.403949Z","caller":"traceutil/trace.go:171","msg":"trace[1232773900] linearizableReadLoop","detail":"{readStateIndex:2064; appliedIndex:2063; }","duration":"137.955638ms","start":"2024-09-20T18:23:04.265959Z","end":"2024-09-20T18:23:04.403914Z","steps":["trace[1232773900] 'read index received'  (duration: 137.83631ms)","trace[1232773900] 'applied index is now lower than readState.Index'  (duration: 118.922µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T18:23:04.404190Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.160514ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:23:04.404218Z","caller":"traceutil/trace.go:171","msg":"trace[1586547199] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1925; }","duration":"138.254725ms","start":"2024-09-20T18:23:04.265955Z","end":"2024-09-20T18:23:04.404210Z","steps":["trace[1586547199] 'agreement among raft nodes before linearized reading'  (duration: 138.105756ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:23:04.404422Z","caller":"traceutil/trace.go:171","msg":"trace[700372140] transaction","detail":"{read_only:false; response_revision:1925; number_of_response:1; }","duration":"379.764994ms","start":"2024-09-20T18:23:04.024645Z","end":"2024-09-20T18:23:04.404410Z","steps":["trace[700372140] 'process raft request'  (duration: 379.19458ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:23:04.404517Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T18:23:04.024622Z","time spent":"379.814521ms","remote":"127.0.0.1:36928","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1924 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-20T18:23:21.256394Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1506}
	{"level":"info","ts":"2024-09-20T18:23:21.288238Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1506,"took":"31.314726ms","hash":517065302,"current-db-size-bytes":7016448,"current-db-size":"7.0 MB","current-db-size-in-use-bytes":4055040,"current-db-size-in-use":"4.1 MB"}
	{"level":"info","ts":"2024-09-20T18:23:21.288299Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":517065302,"revision":1506,"compact-revision":-1}
	{"level":"info","ts":"2024-09-20T18:23:22.430993Z","caller":"traceutil/trace.go:171","msg":"trace[200479020] transaction","detail":"{read_only:false; response_revision:2108; number_of_response:1; }","duration":"314.888557ms","start":"2024-09-20T18:23:22.116093Z","end":"2024-09-20T18:23:22.430981Z","steps":["trace[200479020] 'process raft request'  (duration: 314.552392ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:23:22.431107Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T18:23:22.116078Z","time spent":"314.951125ms","remote":"127.0.0.1:37058","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:2038 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:425 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	{"level":"info","ts":"2024-09-20T18:23:23.865254Z","caller":"traceutil/trace.go:171","msg":"trace[102178879] linearizableReadLoop","detail":"{readStateIndex:2258; appliedIndex:2257; }","duration":"203.488059ms","start":"2024-09-20T18:23:23.661753Z","end":"2024-09-20T18:23:23.865241Z","steps":["trace[102178879] 'read index received'  (duration: 203.347953ms)","trace[102178879] 'applied index is now lower than readState.Index'  (duration: 139.623µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T18:23:23.865357Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.585815ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:23:23.865380Z","caller":"traceutil/trace.go:171","msg":"trace[1945616439] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2110; }","duration":"203.624964ms","start":"2024-09-20T18:23:23.661749Z","end":"2024-09-20T18:23:23.865374Z","steps":["trace[1945616439] 'agreement among raft nodes before linearized reading'  (duration: 203.546895ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:23:23.865639Z","caller":"traceutil/trace.go:171","msg":"trace[1429413700] transaction","detail":"{read_only:false; response_revision:2110; number_of_response:1; }","duration":"210.845365ms","start":"2024-09-20T18:23:23.654785Z","end":"2024-09-20T18:23:23.865631Z","steps":["trace[1429413700] 'process raft request'  (duration: 210.352466ms)"],"step_count":1}
	
	
	==> gcp-auth [7c4b9c3a7c53984fdcd53d01df116f55695ae712f2f303bd6c13b7f7ae352228] <==
	2024/09/20 18:14:53 Ready to write response ...
	2024/09/20 18:14:55 Ready to marshal response ...
	2024/09/20 18:14:55 Ready to write response ...
	2024/09/20 18:14:55 Ready to marshal response ...
	2024/09/20 18:14:55 Ready to write response ...
	2024/09/20 18:22:59 Ready to marshal response ...
	2024/09/20 18:22:59 Ready to write response ...
	2024/09/20 18:22:59 Ready to marshal response ...
	2024/09/20 18:22:59 Ready to write response ...
	2024/09/20 18:22:59 Ready to marshal response ...
	2024/09/20 18:22:59 Ready to write response ...
	2024/09/20 18:23:05 Ready to marshal response ...
	2024/09/20 18:23:05 Ready to write response ...
	2024/09/20 18:23:05 Ready to marshal response ...
	2024/09/20 18:23:05 Ready to write response ...
	2024/09/20 18:23:10 Ready to marshal response ...
	2024/09/20 18:23:10 Ready to write response ...
	2024/09/20 18:23:15 Ready to marshal response ...
	2024/09/20 18:23:15 Ready to write response ...
	2024/09/20 18:23:18 Ready to marshal response ...
	2024/09/20 18:23:18 Ready to write response ...
	2024/09/20 18:23:29 Ready to marshal response ...
	2024/09/20 18:23:29 Ready to write response ...
	2024/09/20 18:23:37 Ready to marshal response ...
	2024/09/20 18:23:37 Ready to write response ...
	
	
	==> kernel <==
	 18:24:12 up 11 min,  0 users,  load average: 0.30, 0.37, 0.32
	Linux addons-446299 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [730952f4127d66b35d731eb28568293e71789263c71a1a0255283cb51922992c] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0920 18:15:27.823202       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:15:27.823313       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0920 18:15:27.823420       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:15:27.823588       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 18:15:27.824490       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 18:15:27.825326       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0920 18:15:31.828151       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.147.48:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.147.48:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	W0920 18:15:31.828390       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:15:31.828450       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 18:15:31.847786       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0920 18:15:31.853561       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0920 18:22:59.185908       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.29.221"}
	I0920 18:23:23.918494       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0920 18:23:25.009930       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0920 18:23:29.482103       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0920 18:23:29.675487       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.190.241"}
	I0920 18:23:30.728395       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [a8af18aadd9a198bf616d46a7b451c4aa04e96f96e40f4b3bfe6f0ed2db6278e] <==
	I0920 18:23:05.584802       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="139.5µs"
	I0920 18:23:05.648638       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="24.833813ms"
	I0920 18:23:05.649442       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="34.713µs"
	I0920 18:23:12.832142       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="3.409µs"
	I0920 18:23:15.306992       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I0920 18:23:15.981923       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-769b77f747" duration="7.686µs"
	I0920 18:23:22.948309       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	E0920 18:23:25.011628       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:23:26.072079       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:23:26.072150       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 18:23:27.852981       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-446299"
	W0920 18:23:28.821603       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:23:28.821738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 18:23:29.815035       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0920 18:23:29.815092       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 18:23:30.342351       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0920 18:23:30.342391       1 shared_informer.go:320] Caches are synced for garbage collector
	I0920 18:23:34.046159       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W0920 18:23:34.852782       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:23:34.852837       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:23:45.509339       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:23:45.509390       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:24:03.134228       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:24:03.134359       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 18:24:11.155220       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="8.255µs"
	
	
	==> kube-proxy [371fb9f89e965c1d1f23b67cb00baa69dc199d2d1a7cb0255780a4516c7256a6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:13:32.095684       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 18:13:32.111185       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.237"]
	E0920 18:13:32.111246       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:13:32.254832       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:13:32.254884       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:13:32.254908       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:13:32.262039       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:13:32.262450       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:13:32.262484       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:13:32.268397       1 config.go:199] "Starting service config controller"
	I0920 18:13:32.268443       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:13:32.268473       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:13:32.268477       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:13:32.268988       1 config.go:328] "Starting node config controller"
	I0920 18:13:32.268994       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:13:32.368877       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:13:32.368886       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:13:32.369073       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e9e7734f588477ea0c8338b75bff4c99d2033144998f9977041fbf99b5880072] <==
	W0920 18:13:22.809246       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 18:13:22.809282       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:22.809585       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 18:13:22.809621       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:22.813253       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 18:13:22.813298       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:22.813377       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 18:13:22.813413       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:22.813464       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 18:13:22.813478       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:22.815129       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 18:13:22.815174       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:23.637031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 18:13:23.637068       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:23.746262       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 18:13:23.746361       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:23.943434       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 18:13:23.943536       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:23.956043       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 18:13:23.956129       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:23.968884       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 18:13:23.969017       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:24.340405       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 18:13:24.340516       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0920 18:13:27.096843       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 18:23:49 addons-446299 kubelet[1199]: E0920 18:23:49.166554    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="785bf044-a4fc-4f3b-aa48-f0c32d84c0cb"
	Sep 20 18:23:55 addons-446299 kubelet[1199]: E0920 18:23:55.504765    1199 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856635504269792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:23:55 addons-446299 kubelet[1199]: E0920 18:23:55.505037    1199 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856635504269792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:24:01 addons-446299 kubelet[1199]: E0920 18:24:01.344000    1199 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 20 18:24:01 addons-446299 kubelet[1199]: E0920 18:24:01.344419    1199 kuberuntime_image.go:55] "Failed to pull image" err="copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 20 18:24:01 addons-446299 kubelet[1199]: E0920 18:24:01.345259    1199 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8zg4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:
,RecursiveReadOnly:nil,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx_default(e00699c2-7689-43aa-9a79-f6b8682fbe91): ErrImagePull: copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 20 18:24:01 addons-446299 kubelet[1199]: E0920 18:24:01.348855    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e00699c2-7689-43aa-9a79-f6b8682fbe91"
	Sep 20 18:24:02 addons-446299 kubelet[1199]: E0920 18:24:02.283634    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="e00699c2-7689-43aa-9a79-f6b8682fbe91"
	Sep 20 18:24:04 addons-446299 kubelet[1199]: E0920 18:24:04.166489    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="785bf044-a4fc-4f3b-aa48-f0c32d84c0cb"
	Sep 20 18:24:05 addons-446299 kubelet[1199]: E0920 18:24:05.507516    1199 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856645506796293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:24:05 addons-446299 kubelet[1199]: E0920 18:24:05.508092    1199 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856645506796293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:24:11 addons-446299 kubelet[1199]: I0920 18:24:11.653223    1199 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqgbp\" (UniqueName: \"kubernetes.io/projected/11ab987d-a80f-412a-8a15-03a5898a2e9e-kube-api-access-zqgbp\") pod \"11ab987d-a80f-412a-8a15-03a5898a2e9e\" (UID: \"11ab987d-a80f-412a-8a15-03a5898a2e9e\") "
	Sep 20 18:24:11 addons-446299 kubelet[1199]: I0920 18:24:11.653316    1199 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvc9z\" (UniqueName: \"kubernetes.io/projected/10b4cecb-c85b-45ef-8043-e88a81971d51-kube-api-access-nvc9z\") pod \"10b4cecb-c85b-45ef-8043-e88a81971d51\" (UID: \"10b4cecb-c85b-45ef-8043-e88a81971d51\") "
	Sep 20 18:24:11 addons-446299 kubelet[1199]: I0920 18:24:11.656121    1199 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10b4cecb-c85b-45ef-8043-e88a81971d51-kube-api-access-nvc9z" (OuterVolumeSpecName: "kube-api-access-nvc9z") pod "10b4cecb-c85b-45ef-8043-e88a81971d51" (UID: "10b4cecb-c85b-45ef-8043-e88a81971d51"). InnerVolumeSpecName "kube-api-access-nvc9z". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 18:24:11 addons-446299 kubelet[1199]: I0920 18:24:11.656658    1199 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11ab987d-a80f-412a-8a15-03a5898a2e9e-kube-api-access-zqgbp" (OuterVolumeSpecName: "kube-api-access-zqgbp") pod "11ab987d-a80f-412a-8a15-03a5898a2e9e" (UID: "11ab987d-a80f-412a-8a15-03a5898a2e9e"). InnerVolumeSpecName "kube-api-access-zqgbp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 18:24:11 addons-446299 kubelet[1199]: I0920 18:24:11.754519    1199 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nvc9z\" (UniqueName: \"kubernetes.io/projected/10b4cecb-c85b-45ef-8043-e88a81971d51-kube-api-access-nvc9z\") on node \"addons-446299\" DevicePath \"\""
	Sep 20 18:24:11 addons-446299 kubelet[1199]: I0920 18:24:11.754562    1199 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zqgbp\" (UniqueName: \"kubernetes.io/projected/11ab987d-a80f-412a-8a15-03a5898a2e9e-kube-api-access-zqgbp\") on node \"addons-446299\" DevicePath \"\""
	Sep 20 18:24:12 addons-446299 kubelet[1199]: I0920 18:24:12.347684    1199 scope.go:117] "RemoveContainer" containerID="5bc1f72e1ea240845c1b51e886bddd626c5c1de271a30103c731f8c4931a84d3"
	Sep 20 18:24:12 addons-446299 kubelet[1199]: I0920 18:24:12.390802    1199 scope.go:117] "RemoveContainer" containerID="5bc1f72e1ea240845c1b51e886bddd626c5c1de271a30103c731f8c4931a84d3"
	Sep 20 18:24:12 addons-446299 kubelet[1199]: E0920 18:24:12.396527    1199 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bc1f72e1ea240845c1b51e886bddd626c5c1de271a30103c731f8c4931a84d3\": container with ID starting with 5bc1f72e1ea240845c1b51e886bddd626c5c1de271a30103c731f8c4931a84d3 not found: ID does not exist" containerID="5bc1f72e1ea240845c1b51e886bddd626c5c1de271a30103c731f8c4931a84d3"
	Sep 20 18:24:12 addons-446299 kubelet[1199]: I0920 18:24:12.396652    1199 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bc1f72e1ea240845c1b51e886bddd626c5c1de271a30103c731f8c4931a84d3"} err="failed to get container status \"5bc1f72e1ea240845c1b51e886bddd626c5c1de271a30103c731f8c4931a84d3\": rpc error: code = NotFound desc = could not find container \"5bc1f72e1ea240845c1b51e886bddd626c5c1de271a30103c731f8c4931a84d3\": container with ID starting with 5bc1f72e1ea240845c1b51e886bddd626c5c1de271a30103c731f8c4931a84d3 not found: ID does not exist"
	Sep 20 18:24:12 addons-446299 kubelet[1199]: I0920 18:24:12.396753    1199 scope.go:117] "RemoveContainer" containerID="c8bc74b520cd1d4dcf7bb82c116c356ff3d8c71b059d02bc9aa144a3677ff3de"
	Sep 20 18:24:12 addons-446299 kubelet[1199]: I0920 18:24:12.443568    1199 scope.go:117] "RemoveContainer" containerID="c8bc74b520cd1d4dcf7bb82c116c356ff3d8c71b059d02bc9aa144a3677ff3de"
	Sep 20 18:24:12 addons-446299 kubelet[1199]: E0920 18:24:12.444359    1199 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8bc74b520cd1d4dcf7bb82c116c356ff3d8c71b059d02bc9aa144a3677ff3de\": container with ID starting with c8bc74b520cd1d4dcf7bb82c116c356ff3d8c71b059d02bc9aa144a3677ff3de not found: ID does not exist" containerID="c8bc74b520cd1d4dcf7bb82c116c356ff3d8c71b059d02bc9aa144a3677ff3de"
	Sep 20 18:24:12 addons-446299 kubelet[1199]: I0920 18:24:12.444396    1199 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8bc74b520cd1d4dcf7bb82c116c356ff3d8c71b059d02bc9aa144a3677ff3de"} err="failed to get container status \"c8bc74b520cd1d4dcf7bb82c116c356ff3d8c71b059d02bc9aa144a3677ff3de\": rpc error: code = NotFound desc = could not find container \"c8bc74b520cd1d4dcf7bb82c116c356ff3d8c71b059d02bc9aa144a3677ff3de\": container with ID starting with c8bc74b520cd1d4dcf7bb82c116c356ff3d8c71b059d02bc9aa144a3677ff3de not found: ID does not exist"
	
	
	==> storage-provisioner [123e17c57dc2abd9c047233f8257257a3994d71637992344add53ad7199bd9f0] <==
	I0920 18:13:37.673799       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 18:13:37.889195       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 18:13:37.889268       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 18:13:37.991169       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 18:13:37.991374       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-446299_0cfdff58-c718-409b-bc42-bb5f67205de8!
	I0920 18:13:37.992328       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8e2a2b2a-26e5-43f5-ad91-442df4e21dfd", APIVersion:"v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-446299_0cfdff58-c718-409b-bc42-bb5f67205de8 became leader
	I0920 18:13:38.191750       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-446299_0cfdff58-c718-409b-bc42-bb5f67205de8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-446299 -n addons-446299
helpers_test.go:261: (dbg) Run:  kubectl --context addons-446299 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox nginx registry-test task-pv-pod-restore ingress-nginx-admission-create-sdwls ingress-nginx-admission-patch-2mwr8
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-446299 describe pod busybox nginx registry-test task-pv-pod-restore ingress-nginx-admission-create-sdwls ingress-nginx-admission-patch-2mwr8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-446299 describe pod busybox nginx registry-test task-pv-pod-restore ingress-nginx-admission-create-sdwls ingress-nginx-admission-patch-2mwr8: exit status 1 (91.465123ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-446299/192.168.39.237
	Start Time:       Fri, 20 Sep 2024 18:14:55 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s6l6f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-s6l6f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m18s                  default-scheduler  Successfully assigned default/busybox to addons-446299
	  Normal   Pulling    7m52s (x4 over 9m17s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m52s (x4 over 9m17s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m52s (x4 over 9m17s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m28s (x6 over 9m16s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m6s (x20 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-446299/192.168.39.237
	Start Time:       Fri, 20 Sep 2024 18:23:29 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8zg4g (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8zg4g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age               From               Message
	  ----     ------     ----              ----               -------
	  Normal   Scheduled  44s               default-scheduler  Successfully assigned default/nginx to addons-446299
	  Warning  Failed     12s               kubelet            Failed to pull image "docker.io/nginx:alpine": copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     12s               kubelet            Error: ErrImagePull
	  Normal   BackOff    11s               kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     11s               kubelet            Error: ImagePullBackOff
	  Normal   Pulling    0s (x2 over 43s)  kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:                      registry-test
	Namespace:                 default
	Priority:                  0
	Service Account:           default
	Node:                      addons-446299/192.168.39.237
	Start Time:                Fri, 20 Sep 2024 18:23:10 +0000
	Labels:                    run=registry-test
	Annotations:               <none>
	Status:                    Terminating (lasts <invalid>)
	Termination Grace Period:  30s
	IP:                        10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  registry-test:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Args:
	      sh
	      -c
	      wget --spider -S http://registry.kube-system.svc.cluster.local
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zlk52 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zlk52:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  63s                default-scheduler  Successfully assigned default/registry-test to addons-446299
	  Warning  Failed     47s (x2 over 62s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     47s (x2 over 62s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    34s (x2 over 62s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox"
	  Warning  Failed     34s (x2 over 62s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    21s (x3 over 62s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox"
	
	
	Name:             task-pv-pod-restore
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-446299/192.168.39.237
	Start Time:       Fri, 20 Sep 2024 18:23:37 +0000
	Labels:           app=task-pv-pod-restore
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zzgp9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc-restore
	    ReadOnly:   false
	  kube-api-access-zzgp9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  36s   default-scheduler  Successfully assigned default/task-pv-pod-restore to addons-446299
	  Normal  Pulling    35s   kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-sdwls" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2mwr8" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-446299 describe pod busybox nginx registry-test task-pv-pod-restore ingress-nginx-admission-create-sdwls ingress-nginx-admission-patch-2mwr8: exit status 1
--- FAIL: TestAddons/parallel/Registry (75.19s)

                                                
                                    
TestAddons/parallel/Ingress (482.99s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-446299 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-446299 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-446299 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e00699c2-7689-43aa-9a79-f6b8682fbe91] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:329: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:248: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-446299 -n addons-446299
addons_test.go:248: TestAddons/parallel/Ingress: showing logs for failed pods as of 2024-09-20 18:31:29.972601256 +0000 UTC m=+1141.818245860
addons_test.go:248: (dbg) Run:  kubectl --context addons-446299 describe po nginx -n default
addons_test.go:248: (dbg) kubectl --context addons-446299 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-446299/192.168.39.237
Start Time:       Fri, 20 Sep 2024 18:23:29 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.29
IPs:
IP:  10.244.0.29
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8zg4g (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-8zg4g:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  8m1s                   default-scheduler  Successfully assigned default/nginx to addons-446299
Warning  Failed     7m29s                  kubelet            Failed to pull image "docker.io/nginx:alpine": copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     4m56s (x2 over 6m28s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   Pulling    4m6s (x4 over 8m)      kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     3m21s (x4 over 7m29s)  kubelet            Error: ErrImagePull
Warning  Failed     3m21s                  kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   BackOff    2m56s (x7 over 7m28s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     2m56s (x7 over 7m28s)  kubelet            Error: ImagePullBackOff
addons_test.go:248: (dbg) Run:  kubectl --context addons-446299 logs nginx -n default
addons_test.go:248: (dbg) Non-zero exit: kubectl --context addons-446299 logs nginx -n default: exit status 1 (67.646558ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:248: kubectl --context addons-446299 logs nginx -n default: exit status 1
addons_test.go:249: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-446299 -n addons-446299
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-446299 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-446299 logs -n 25: (1.329285933s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-675466 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | -p download-only-675466                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
	| delete  | -p download-only-675466                                                                     | download-only-675466 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
	| start   | -o=json --download-only                                                                     | download-only-363869 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | -p download-only-363869                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
	| delete  | -p download-only-363869                                                                     | download-only-363869 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
	| delete  | -p download-only-675466                                                                     | download-only-675466 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
	| delete  | -p download-only-363869                                                                     | download-only-363869 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-747965 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | binary-mirror-747965                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39359                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-747965                                                                     | binary-mirror-747965 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
	| addons  | enable dashboard -p                                                                         | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | addons-446299                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | addons-446299                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-446299 --wait=true                                                                | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:14 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:22 UTC | 20 Sep 24 18:22 UTC |
	|         | -p addons-446299                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
	|         | -p addons-446299                                                                            |                      |         |         |                     |                     |
	| addons  | addons-446299 addons disable                                                                | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-446299 addons disable                                                                | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
	|         | addons-446299                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-446299 ssh cat                                                                       | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
	|         | /opt/local-path-provisioner/pvc-11168afa-d97c-4581-90a8-f19b354e2c35_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-446299 addons disable                                                                | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
	|         | addons-446299                                                                               |                      |         |         |                     |                     |
	| ip      | addons-446299 ip                                                                            | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:24 UTC | 20 Sep 24 18:24 UTC |
	| addons  | addons-446299 addons disable                                                                | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:24 UTC | 20 Sep 24 18:24 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-446299 addons                                                                        | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:12:45
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:12:45.452837  749135 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:12:45.452957  749135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:12:45.452966  749135 out.go:358] Setting ErrFile to fd 2...
	I0920 18:12:45.452970  749135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:12:45.453156  749135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
	I0920 18:12:45.453777  749135 out.go:352] Setting JSON to false
	I0920 18:12:45.454793  749135 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6915,"bootTime":1726849050,"procs":270,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:12:45.454907  749135 start.go:139] virtualization: kvm guest
	I0920 18:12:45.457071  749135 out.go:177] * [addons-446299] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:12:45.458344  749135 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 18:12:45.458335  749135 notify.go:220] Checking for updates...
	I0920 18:12:45.459761  749135 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:12:45.461106  749135 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:12:45.462449  749135 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:12:45.463737  749135 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:12:45.465084  749135 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:12:45.466379  749135 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:12:45.497434  749135 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 18:12:45.498519  749135 start.go:297] selected driver: kvm2
	I0920 18:12:45.498542  749135 start.go:901] validating driver "kvm2" against <nil>
	I0920 18:12:45.498561  749135 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:12:45.499322  749135 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:12:45.499411  749135 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19678-739831/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:12:45.513921  749135 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:12:45.513966  749135 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:12:45.514272  749135 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:12:45.514314  749135 cni.go:84] Creating CNI manager for ""
	I0920 18:12:45.514372  749135 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:12:45.514386  749135 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 18:12:45.514458  749135 start.go:340] cluster config:
	{Name:addons-446299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-446299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:12:45.514600  749135 iso.go:125] acquiring lock: {Name:mk7c8e0c52ea50ffb7ac28fb9347d4f667085c62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:12:45.516315  749135 out.go:177] * Starting "addons-446299" primary control-plane node in "addons-446299" cluster
	I0920 18:12:45.517423  749135 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:12:45.517447  749135 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:12:45.517459  749135 cache.go:56] Caching tarball of preloaded images
	I0920 18:12:45.517543  749135 preload.go:172] Found /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:12:45.517552  749135 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:12:45.517857  749135 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/config.json ...
	I0920 18:12:45.517880  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/config.json: {Name:mkaa7e3a2b8a2d95cecdc721e4fd7f5310773e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:12:45.518032  749135 start.go:360] acquireMachinesLock for addons-446299: {Name:mke27b943eaf3105a3a7818ba8cbb5bd07aa92e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:12:45.518095  749135 start.go:364] duration metric: took 46.763µs to acquireMachinesLock for "addons-446299"
	I0920 18:12:45.518131  749135 start.go:93] Provisioning new machine with config: &{Name:addons-446299 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-446299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:12:45.518195  749135 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 18:12:45.520537  749135 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0920 18:12:45.520688  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:12:45.520727  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:12:45.535639  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40567
	I0920 18:12:45.536170  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:12:45.536786  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:12:45.536808  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:12:45.537162  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:12:45.537383  749135 main.go:141] libmachine: (addons-446299) Calling .GetMachineName
	I0920 18:12:45.537540  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:12:45.537694  749135 start.go:159] libmachine.API.Create for "addons-446299" (driver="kvm2")
	I0920 18:12:45.537726  749135 client.go:168] LocalClient.Create starting
	I0920 18:12:45.537791  749135 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem
	I0920 18:12:45.635672  749135 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem
	I0920 18:12:45.854167  749135 main.go:141] libmachine: Running pre-create checks...
	I0920 18:12:45.854195  749135 main.go:141] libmachine: (addons-446299) Calling .PreCreateCheck
	I0920 18:12:45.854768  749135 main.go:141] libmachine: (addons-446299) Calling .GetConfigRaw
	I0920 18:12:45.855238  749135 main.go:141] libmachine: Creating machine...
	I0920 18:12:45.855256  749135 main.go:141] libmachine: (addons-446299) Calling .Create
	I0920 18:12:45.855444  749135 main.go:141] libmachine: (addons-446299) Creating KVM machine...
	I0920 18:12:45.856800  749135 main.go:141] libmachine: (addons-446299) DBG | found existing default KVM network
	I0920 18:12:45.857584  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:45.857437  749157 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015bb0}
	I0920 18:12:45.857661  749135 main.go:141] libmachine: (addons-446299) DBG | created network xml: 
	I0920 18:12:45.857685  749135 main.go:141] libmachine: (addons-446299) DBG | <network>
	I0920 18:12:45.857700  749135 main.go:141] libmachine: (addons-446299) DBG |   <name>mk-addons-446299</name>
	I0920 18:12:45.857710  749135 main.go:141] libmachine: (addons-446299) DBG |   <dns enable='no'/>
	I0920 18:12:45.857722  749135 main.go:141] libmachine: (addons-446299) DBG |   
	I0920 18:12:45.857736  749135 main.go:141] libmachine: (addons-446299) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 18:12:45.857749  749135 main.go:141] libmachine: (addons-446299) DBG |     <dhcp>
	I0920 18:12:45.857762  749135 main.go:141] libmachine: (addons-446299) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 18:12:45.857774  749135 main.go:141] libmachine: (addons-446299) DBG |     </dhcp>
	I0920 18:12:45.857784  749135 main.go:141] libmachine: (addons-446299) DBG |   </ip>
	I0920 18:12:45.857795  749135 main.go:141] libmachine: (addons-446299) DBG |   
	I0920 18:12:45.857805  749135 main.go:141] libmachine: (addons-446299) DBG | </network>
	I0920 18:12:45.857817  749135 main.go:141] libmachine: (addons-446299) DBG | 
	I0920 18:12:45.862810  749135 main.go:141] libmachine: (addons-446299) DBG | trying to create private KVM network mk-addons-446299 192.168.39.0/24...
	I0920 18:12:45.928127  749135 main.go:141] libmachine: (addons-446299) DBG | private KVM network mk-addons-446299 192.168.39.0/24 created
	I0920 18:12:45.928216  749135 main.go:141] libmachine: (addons-446299) Setting up store path in /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299 ...
	I0920 18:12:45.928243  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:45.928106  749157 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:12:45.928255  749135 main.go:141] libmachine: (addons-446299) Building disk image from file:///home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 18:12:45.928282  749135 main.go:141] libmachine: (addons-446299) Downloading /home/jenkins/minikube-integration/19678-739831/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 18:12:46.198371  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:46.198204  749157 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa...
	I0920 18:12:46.306630  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:46.306482  749157 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/addons-446299.rawdisk...
	I0920 18:12:46.306662  749135 main.go:141] libmachine: (addons-446299) DBG | Writing magic tar header
	I0920 18:12:46.306673  749135 main.go:141] libmachine: (addons-446299) DBG | Writing SSH key tar header
	I0920 18:12:46.306681  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:46.306605  749157 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299 ...
	I0920 18:12:46.306695  749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299
	I0920 18:12:46.306758  749135 main.go:141] libmachine: (addons-446299) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299 (perms=drwx------)
	I0920 18:12:46.306798  749135 main.go:141] libmachine: (addons-446299) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube/machines (perms=drwxr-xr-x)
	I0920 18:12:46.306816  749135 main.go:141] libmachine: (addons-446299) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube (perms=drwxr-xr-x)
	I0920 18:12:46.306825  749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube/machines
	I0920 18:12:46.306839  749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:12:46.306872  749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831
	I0920 18:12:46.306884  749135 main.go:141] libmachine: (addons-446299) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831 (perms=drwxrwxr-x)
	I0920 18:12:46.306904  749135 main.go:141] libmachine: (addons-446299) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 18:12:46.306929  749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 18:12:46.306939  749135 main.go:141] libmachine: (addons-446299) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 18:12:46.306952  749135 main.go:141] libmachine: (addons-446299) Creating domain...
	I0920 18:12:46.306963  749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home/jenkins
	I0920 18:12:46.306969  749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home
	I0920 18:12:46.306976  749135 main.go:141] libmachine: (addons-446299) DBG | Skipping /home - not owner
	I0920 18:12:46.308063  749135 main.go:141] libmachine: (addons-446299) define libvirt domain using xml: 
	I0920 18:12:46.308090  749135 main.go:141] libmachine: (addons-446299) <domain type='kvm'>
	I0920 18:12:46.308100  749135 main.go:141] libmachine: (addons-446299)   <name>addons-446299</name>
	I0920 18:12:46.308107  749135 main.go:141] libmachine: (addons-446299)   <memory unit='MiB'>4000</memory>
	I0920 18:12:46.308114  749135 main.go:141] libmachine: (addons-446299)   <vcpu>2</vcpu>
	I0920 18:12:46.308128  749135 main.go:141] libmachine: (addons-446299)   <features>
	I0920 18:12:46.308136  749135 main.go:141] libmachine: (addons-446299)     <acpi/>
	I0920 18:12:46.308144  749135 main.go:141] libmachine: (addons-446299)     <apic/>
	I0920 18:12:46.308150  749135 main.go:141] libmachine: (addons-446299)     <pae/>
	I0920 18:12:46.308156  749135 main.go:141] libmachine: (addons-446299)     
	I0920 18:12:46.308161  749135 main.go:141] libmachine: (addons-446299)   </features>
	I0920 18:12:46.308167  749135 main.go:141] libmachine: (addons-446299)   <cpu mode='host-passthrough'>
	I0920 18:12:46.308172  749135 main.go:141] libmachine: (addons-446299)   
	I0920 18:12:46.308184  749135 main.go:141] libmachine: (addons-446299)   </cpu>
	I0920 18:12:46.308194  749135 main.go:141] libmachine: (addons-446299)   <os>
	I0920 18:12:46.308203  749135 main.go:141] libmachine: (addons-446299)     <type>hvm</type>
	I0920 18:12:46.308221  749135 main.go:141] libmachine: (addons-446299)     <boot dev='cdrom'/>
	I0920 18:12:46.308234  749135 main.go:141] libmachine: (addons-446299)     <boot dev='hd'/>
	I0920 18:12:46.308243  749135 main.go:141] libmachine: (addons-446299)     <bootmenu enable='no'/>
	I0920 18:12:46.308250  749135 main.go:141] libmachine: (addons-446299)   </os>
	I0920 18:12:46.308255  749135 main.go:141] libmachine: (addons-446299)   <devices>
	I0920 18:12:46.308262  749135 main.go:141] libmachine: (addons-446299)     <disk type='file' device='cdrom'>
	I0920 18:12:46.308277  749135 main.go:141] libmachine: (addons-446299)       <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/boot2docker.iso'/>
	I0920 18:12:46.308290  749135 main.go:141] libmachine: (addons-446299)       <target dev='hdc' bus='scsi'/>
	I0920 18:12:46.308302  749135 main.go:141] libmachine: (addons-446299)       <readonly/>
	I0920 18:12:46.308312  749135 main.go:141] libmachine: (addons-446299)     </disk>
	I0920 18:12:46.308324  749135 main.go:141] libmachine: (addons-446299)     <disk type='file' device='disk'>
	I0920 18:12:46.308335  749135 main.go:141] libmachine: (addons-446299)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 18:12:46.308350  749135 main.go:141] libmachine: (addons-446299)       <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/addons-446299.rawdisk'/>
	I0920 18:12:46.308364  749135 main.go:141] libmachine: (addons-446299)       <target dev='hda' bus='virtio'/>
	I0920 18:12:46.308376  749135 main.go:141] libmachine: (addons-446299)     </disk>
	I0920 18:12:46.308386  749135 main.go:141] libmachine: (addons-446299)     <interface type='network'>
	I0920 18:12:46.308395  749135 main.go:141] libmachine: (addons-446299)       <source network='mk-addons-446299'/>
	I0920 18:12:46.308404  749135 main.go:141] libmachine: (addons-446299)       <model type='virtio'/>
	I0920 18:12:46.308414  749135 main.go:141] libmachine: (addons-446299)     </interface>
	I0920 18:12:46.308424  749135 main.go:141] libmachine: (addons-446299)     <interface type='network'>
	I0920 18:12:46.308440  749135 main.go:141] libmachine: (addons-446299)       <source network='default'/>
	I0920 18:12:46.308454  749135 main.go:141] libmachine: (addons-446299)       <model type='virtio'/>
	I0920 18:12:46.308462  749135 main.go:141] libmachine: (addons-446299)     </interface>
	I0920 18:12:46.308467  749135 main.go:141] libmachine: (addons-446299)     <serial type='pty'>
	I0920 18:12:46.308472  749135 main.go:141] libmachine: (addons-446299)       <target port='0'/>
	I0920 18:12:46.308478  749135 main.go:141] libmachine: (addons-446299)     </serial>
	I0920 18:12:46.308486  749135 main.go:141] libmachine: (addons-446299)     <console type='pty'>
	I0920 18:12:46.308493  749135 main.go:141] libmachine: (addons-446299)       <target type='serial' port='0'/>
	I0920 18:12:46.308498  749135 main.go:141] libmachine: (addons-446299)     </console>
	I0920 18:12:46.308504  749135 main.go:141] libmachine: (addons-446299)     <rng model='virtio'>
	I0920 18:12:46.308512  749135 main.go:141] libmachine: (addons-446299)       <backend model='random'>/dev/random</backend>
	I0920 18:12:46.308518  749135 main.go:141] libmachine: (addons-446299)     </rng>
	I0920 18:12:46.308522  749135 main.go:141] libmachine: (addons-446299)     
	I0920 18:12:46.308528  749135 main.go:141] libmachine: (addons-446299)     
	I0920 18:12:46.308544  749135 main.go:141] libmachine: (addons-446299)   </devices>
	I0920 18:12:46.308556  749135 main.go:141] libmachine: (addons-446299) </domain>
	I0920 18:12:46.308574  749135 main.go:141] libmachine: (addons-446299) 
	I0920 18:12:46.314191  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:13:6e:16 in network default
	I0920 18:12:46.314696  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:46.314712  749135 main.go:141] libmachine: (addons-446299) Ensuring networks are active...
	I0920 18:12:46.315254  749135 main.go:141] libmachine: (addons-446299) Ensuring network default is active
	I0920 18:12:46.315494  749135 main.go:141] libmachine: (addons-446299) Ensuring network mk-addons-446299 is active
	I0920 18:12:46.315890  749135 main.go:141] libmachine: (addons-446299) Getting domain xml...
	I0920 18:12:46.316428  749135 main.go:141] libmachine: (addons-446299) Creating domain...
	I0920 18:12:47.702575  749135 main.go:141] libmachine: (addons-446299) Waiting to get IP...
	I0920 18:12:47.703586  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:47.704120  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:47.704148  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:47.704086  749157 retry.go:31] will retry after 271.659022ms: waiting for machine to come up
	I0920 18:12:47.977759  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:47.978244  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:47.978271  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:47.978199  749157 retry.go:31] will retry after 286.269777ms: waiting for machine to come up
	I0920 18:12:48.265706  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:48.266154  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:48.266176  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:48.266104  749157 retry.go:31] will retry after 302.528012ms: waiting for machine to come up
	I0920 18:12:48.570875  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:48.571362  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:48.571386  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:48.571312  749157 retry.go:31] will retry after 579.846713ms: waiting for machine to come up
	I0920 18:12:49.153045  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:49.153478  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:49.153506  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:49.153418  749157 retry.go:31] will retry after 501.770816ms: waiting for machine to come up
	I0920 18:12:49.657032  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:49.657383  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:49.657410  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:49.657355  749157 retry.go:31] will retry after 903.967154ms: waiting for machine to come up
	I0920 18:12:50.562781  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:50.563350  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:50.563375  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:50.563286  749157 retry.go:31] will retry after 1.03177474s: waiting for machine to come up
	I0920 18:12:51.596424  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:51.596850  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:51.596971  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:51.596890  749157 retry.go:31] will retry after 1.278733336s: waiting for machine to come up
	I0920 18:12:52.877368  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:52.877732  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:52.877761  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:52.877690  749157 retry.go:31] will retry after 1.241144447s: waiting for machine to come up
	I0920 18:12:54.121228  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:54.121598  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:54.121623  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:54.121564  749157 retry.go:31] will retry after 2.253509148s: waiting for machine to come up
	I0920 18:12:56.377139  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:56.377598  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:56.377630  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:56.377537  749157 retry.go:31] will retry after 2.563830681s: waiting for machine to come up
	I0920 18:12:58.944264  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:58.944679  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:58.944723  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:58.944624  749157 retry.go:31] will retry after 2.392098661s: waiting for machine to come up
	I0920 18:13:01.339634  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:01.340032  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:13:01.340088  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:13:01.339990  749157 retry.go:31] will retry after 2.800869076s: waiting for machine to come up
	I0920 18:13:04.142006  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:04.142476  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:13:04.142500  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:13:04.142411  749157 retry.go:31] will retry after 4.101773144s: waiting for machine to come up
	I0920 18:13:08.247401  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.247831  749135 main.go:141] libmachine: (addons-446299) Found IP for machine: 192.168.39.237
	I0920 18:13:08.247867  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has current primary IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.247875  749135 main.go:141] libmachine: (addons-446299) Reserving static IP address...
	I0920 18:13:08.248197  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find host DHCP lease matching {name: "addons-446299", mac: "52:54:00:33:9c:3e", ip: "192.168.39.237"} in network mk-addons-446299
	I0920 18:13:08.320366  749135 main.go:141] libmachine: (addons-446299) DBG | Getting to WaitForSSH function...
	I0920 18:13:08.320400  749135 main.go:141] libmachine: (addons-446299) Reserved static IP address: 192.168.39.237
	I0920 18:13:08.320413  749135 main.go:141] libmachine: (addons-446299) Waiting for SSH to be available...
	I0920 18:13:08.323450  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.323840  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:minikube Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:08.323876  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.324043  749135 main.go:141] libmachine: (addons-446299) DBG | Using SSH client type: external
	I0920 18:13:08.324075  749135 main.go:141] libmachine: (addons-446299) DBG | Using SSH private key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa (-rw-------)
	I0920 18:13:08.324116  749135 main.go:141] libmachine: (addons-446299) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.237 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:13:08.324134  749135 main.go:141] libmachine: (addons-446299) DBG | About to run SSH command:
	I0920 18:13:08.324145  749135 main.go:141] libmachine: (addons-446299) DBG | exit 0
	I0920 18:13:08.447247  749135 main.go:141] libmachine: (addons-446299) DBG | SSH cmd err, output: <nil>: 
	I0920 18:13:08.447526  749135 main.go:141] libmachine: (addons-446299) KVM machine creation complete!
	I0920 18:13:08.447847  749135 main.go:141] libmachine: (addons-446299) Calling .GetConfigRaw
	I0920 18:13:08.448509  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:08.448699  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:08.448836  749135 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 18:13:08.448855  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:08.450187  749135 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 18:13:08.450200  749135 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 18:13:08.450206  749135 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 18:13:08.450212  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:08.452411  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.452723  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:08.452751  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.452850  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:08.453019  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.453174  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.453318  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:08.453492  749135 main.go:141] libmachine: Using SSH client type: native
	I0920 18:13:08.453697  749135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0920 18:13:08.453711  749135 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 18:13:08.550007  749135 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:13:08.550034  749135 main.go:141] libmachine: Detecting the provisioner...
	I0920 18:13:08.550043  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:08.552709  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.553024  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:08.553055  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.553193  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:08.553387  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.553523  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.553628  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:08.553820  749135 main.go:141] libmachine: Using SSH client type: native
	I0920 18:13:08.554035  749135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0920 18:13:08.554048  749135 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 18:13:08.651415  749135 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 18:13:08.651508  749135 main.go:141] libmachine: found compatible host: buildroot
	I0920 18:13:08.651519  749135 main.go:141] libmachine: Provisioning with buildroot...
	I0920 18:13:08.651527  749135 main.go:141] libmachine: (addons-446299) Calling .GetMachineName
	I0920 18:13:08.651799  749135 buildroot.go:166] provisioning hostname "addons-446299"
	I0920 18:13:08.651833  749135 main.go:141] libmachine: (addons-446299) Calling .GetMachineName
	I0920 18:13:08.652051  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:08.654630  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.654993  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:08.655016  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.655142  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:08.655325  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.655472  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.655580  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:08.655728  749135 main.go:141] libmachine: Using SSH client type: native
	I0920 18:13:08.655930  749135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0920 18:13:08.655944  749135 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-446299 && echo "addons-446299" | sudo tee /etc/hostname
	I0920 18:13:08.764545  749135 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-446299
	
	I0920 18:13:08.764579  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:08.767492  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.767918  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:08.767944  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.768198  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:08.768402  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.768591  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.768737  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:08.768929  749135 main.go:141] libmachine: Using SSH client type: native
	I0920 18:13:08.769151  749135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0920 18:13:08.769174  749135 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-446299' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-446299/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-446299' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:13:08.875844  749135 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:13:08.875886  749135 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19678-739831/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-739831/.minikube}
	I0920 18:13:08.875933  749135 buildroot.go:174] setting up certificates
	I0920 18:13:08.875949  749135 provision.go:84] configureAuth start
	I0920 18:13:08.875963  749135 main.go:141] libmachine: (addons-446299) Calling .GetMachineName
	I0920 18:13:08.876262  749135 main.go:141] libmachine: (addons-446299) Calling .GetIP
	I0920 18:13:08.878744  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.879098  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:08.879119  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.879270  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:08.881403  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.881836  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:08.881865  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.881970  749135 provision.go:143] copyHostCerts
	I0920 18:13:08.882095  749135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem (1078 bytes)
	I0920 18:13:08.882283  749135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem (1123 bytes)
	I0920 18:13:08.882377  749135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem (1679 bytes)
	I0920 18:13:08.882472  749135 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem org=jenkins.addons-446299 san=[127.0.0.1 192.168.39.237 addons-446299 localhost minikube]
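	(For reference: the server cert generated above is signed by the local minikube CA with san=[127.0.0.1 192.168.39.237 addons-446299 localhost minikube]. A rough stand-alone openssl equivalent is sketched below; the subject fields, key size and file names are illustrative only, not what provision.go actually executes.)
	  openssl genrsa -out server-key.pem 2048
	  openssl req -new -key server-key.pem -subj "/O=jenkins.addons-446299/CN=minikube" -out server.csr
	  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	    -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.237,DNS:addons-446299,DNS:localhost,DNS:minikube") \
	    -out server.pem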
	I0920 18:13:09.208189  749135 provision.go:177] copyRemoteCerts
	I0920 18:13:09.208279  749135 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:13:09.208315  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:09.211040  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.211327  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.211351  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.211544  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:09.211780  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.211947  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:09.212123  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:09.297180  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:13:09.320798  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 18:13:09.344012  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:13:09.366859  749135 provision.go:87] duration metric: took 490.878212ms to configureAuth
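	(The three scp steps above push ca.pem, server.pem and server-key.pem into /etc/docker over the SSH session opened at sshutil.go:53 (user docker, the id_rsa key shown there). ssh_runner streams the files over that session rather than shelling out, but an equivalent manual transfer would look roughly like this, with paths taken from the log:)
	  KEY=/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa
	  scp -i "$KEY" /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem docker@192.168.39.237:/tmp/ca.pem
	  ssh -i "$KEY" docker@192.168.39.237 'sudo install -m 0644 /tmp/ca.pem /etc/docker/ca.pem'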
	I0920 18:13:09.366893  749135 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:13:09.367101  749135 config.go:182] Loaded profile config "addons-446299": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:13:09.367184  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:09.369576  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.369868  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.369896  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.370087  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:09.370268  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.370416  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.370568  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:09.370692  749135 main.go:141] libmachine: Using SSH client type: native
	I0920 18:13:09.370898  749135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0920 18:13:09.370918  749135 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:13:09.580901  749135 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
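	(The command above writes CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12' to /etc/sysconfig/crio.minikube and restarts CRI-O, so registries exposed on the service CIDR can be used without TLS. Assuming the crio unit on the ISO sources that file, which is the usual minikube setup, the wiring can be inspected on the guest with:)
	  cat /etc/sysconfig/crio.minikube
	  systemctl cat crio                  # unit file, including any EnvironmentFile= lines
	  systemctl show -p Environment crio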
	I0920 18:13:09.580930  749135 main.go:141] libmachine: Checking connection to Docker...
	I0920 18:13:09.580938  749135 main.go:141] libmachine: (addons-446299) Calling .GetURL
	I0920 18:13:09.582415  749135 main.go:141] libmachine: (addons-446299) DBG | Using libvirt version 6000000
	I0920 18:13:09.584573  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.584892  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.584919  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.585053  749135 main.go:141] libmachine: Docker is up and running!
	I0920 18:13:09.585065  749135 main.go:141] libmachine: Reticulating splines...
	I0920 18:13:09.585073  749135 client.go:171] duration metric: took 24.047336599s to LocalClient.Create
	I0920 18:13:09.585100  749135 start.go:167] duration metric: took 24.047408021s to libmachine.API.Create "addons-446299"
	I0920 18:13:09.585116  749135 start.go:293] postStartSetup for "addons-446299" (driver="kvm2")
	I0920 18:13:09.585129  749135 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:13:09.585147  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:09.585408  749135 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:13:09.585435  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:09.587350  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.587666  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.587695  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.587795  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:09.587993  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.588132  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:09.588235  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:09.664940  749135 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:13:09.669300  749135 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:13:09.669326  749135 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/addons for local assets ...
	I0920 18:13:09.669399  749135 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/files for local assets ...
	I0920 18:13:09.669426  749135 start.go:296] duration metric: took 84.302482ms for postStartSetup
	I0920 18:13:09.669464  749135 main.go:141] libmachine: (addons-446299) Calling .GetConfigRaw
	I0920 18:13:09.670097  749135 main.go:141] libmachine: (addons-446299) Calling .GetIP
	I0920 18:13:09.672635  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.673027  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.673059  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.673292  749135 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/config.json ...
	I0920 18:13:09.673507  749135 start.go:128] duration metric: took 24.155298051s to createHost
	I0920 18:13:09.673535  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:09.675782  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.676085  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.676118  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.676239  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:09.676425  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.676577  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.676704  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:09.676850  749135 main.go:141] libmachine: Using SSH client type: native
	I0920 18:13:09.677016  749135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0920 18:13:09.677026  749135 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:13:09.775435  749135 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726855989.751621835
	
	I0920 18:13:09.775464  749135 fix.go:216] guest clock: 1726855989.751621835
	I0920 18:13:09.775474  749135 fix.go:229] Guest: 2024-09-20 18:13:09.751621835 +0000 UTC Remote: 2024-09-20 18:13:09.673520947 +0000 UTC m=+24.255782208 (delta=78.100888ms)
	I0920 18:13:09.775526  749135 fix.go:200] guest clock delta is within tolerance: 78.100888ms
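	(For the clock check: the delta is simply Guest minus Remote from the fix.go line above, 18:13:09.751621835 − 18:13:09.673520947 = 0.078100888 s, i.e. the 78.100888ms that fix.go reports as within tolerance.)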
	I0920 18:13:09.775540  749135 start.go:83] releasing machines lock for "addons-446299", held for 24.257428579s
	I0920 18:13:09.775567  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:09.775862  749135 main.go:141] libmachine: (addons-446299) Calling .GetIP
	I0920 18:13:09.778659  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.779012  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.779037  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.779220  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:09.779691  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:09.779841  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:09.779938  749135 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:13:09.779984  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:09.780090  749135 ssh_runner.go:195] Run: cat /version.json
	I0920 18:13:09.780115  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:09.782348  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.782682  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.782703  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.782721  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.782827  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:09.783033  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.783120  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.783141  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.783235  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:09.783325  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:09.783381  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:09.783467  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.783589  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:09.783728  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:09.855541  749135 ssh_runner.go:195] Run: systemctl --version
	I0920 18:13:09.885114  749135 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:13:10.038473  749135 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:13:10.044604  749135 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:13:10.044673  749135 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:13:10.061773  749135 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:13:10.061802  749135 start.go:495] detecting cgroup driver to use...
	I0920 18:13:10.061871  749135 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:13:10.078163  749135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:13:10.092123  749135 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:13:10.092186  749135 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:13:10.105354  749135 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:13:10.118581  749135 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:13:10.228500  749135 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:13:10.385243  749135 docker.go:233] disabling docker service ...
	I0920 18:13:10.385317  749135 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:13:10.399346  749135 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:13:10.411799  749135 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:13:10.532538  749135 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:13:10.657590  749135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:13:10.672417  749135 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:13:10.690910  749135 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:13:10.690989  749135 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:13:10.701918  749135 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:13:10.702004  749135 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:13:10.712909  749135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:13:10.723847  749135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:13:10.734707  749135 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:13:10.745859  749135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:13:10.756720  749135 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:13:10.781698  749135 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:13:10.792301  749135 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:13:10.801512  749135 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:13:10.801614  749135 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:13:10.815061  749135 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:13:10.824568  749135 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:13:10.942263  749135 ssh_runner.go:195] Run: sudo systemctl restart crio
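	(Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following relevant keys. This is reconstructed from the commands, not dumped from the guest; the commented section names follow CRI-O's usual TOML layout:)
	  # [crio.image]
	  pause_image = "registry.k8s.io/pause:3.10"
	  # [crio.runtime]
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]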
	I0920 18:13:11.344964  749135 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:13:11.345085  749135 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:13:11.350594  749135 start.go:563] Will wait 60s for crictl version
	I0920 18:13:11.350677  749135 ssh_runner.go:195] Run: which crictl
	I0920 18:13:11.354600  749135 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:13:11.392003  749135 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
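	(The version check goes through the endpoint written to /etc/crictl.yaml a few lines earlier; the same information can be pulled explicitly, e.g.:)
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	  sudo crictl info    # runtime readiness and CNI status as reported over the CRI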
	I0920 18:13:11.392112  749135 ssh_runner.go:195] Run: crio --version
	I0920 18:13:11.424468  749135 ssh_runner.go:195] Run: crio --version
	I0920 18:13:11.468344  749135 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:13:11.469889  749135 main.go:141] libmachine: (addons-446299) Calling .GetIP
	I0920 18:13:11.472633  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:11.472955  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:11.472986  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:11.473236  749135 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:13:11.477639  749135 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:13:11.490126  749135 kubeadm.go:883] updating cluster {Name:addons-446299 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:addons-446299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:13:11.490246  749135 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:13:11.490303  749135 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:13:11.522179  749135 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 18:13:11.522257  749135 ssh_runner.go:195] Run: which lz4
	I0920 18:13:11.526368  749135 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:13:11.530534  749135 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:13:11.530569  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 18:13:12.754100  749135 crio.go:462] duration metric: took 1.227762585s to copy over tarball
	I0920 18:13:12.754195  749135 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:13:14.814758  749135 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.060523421s)
	I0920 18:13:14.814798  749135 crio.go:469] duration metric: took 2.06066428s to extract the tarball
	I0920 18:13:14.814808  749135 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 18:13:14.850931  749135 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:13:14.892855  749135 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:13:14.892884  749135 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:13:14.892894  749135 kubeadm.go:934] updating node { 192.168.39.237 8443 v1.31.1 crio true true} ...
	I0920 18:13:14.893002  749135 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-446299 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.237
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-446299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:13:14.893069  749135 ssh_runner.go:195] Run: crio config
	I0920 18:13:14.935948  749135 cni.go:84] Creating CNI manager for ""
	I0920 18:13:14.935974  749135 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:13:14.935987  749135 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:13:14.936010  749135 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.237 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-446299 NodeName:addons-446299 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.237"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.237 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:13:14.936153  749135 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.237
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-446299"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.237
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.237"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
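	(Before the init run further below, the rendered config, staged as /var/tmp/minikube/kubeadm.yaml.new and later copied to /var/tmp/minikube/kubeadm.yaml, can be sanity-checked without touching the node. This is only a checking aid, not a step minikube performs; `kubeadm config validate` needs kubeadm ≥ v1.26:)
	  sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	  sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run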
	I0920 18:13:14.936224  749135 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:13:14.945879  749135 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:13:14.945951  749135 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:13:14.955112  749135 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 18:13:14.971443  749135 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:13:14.987494  749135 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0920 18:13:15.004128  749135 ssh_runner.go:195] Run: grep 192.168.39.237	control-plane.minikube.internal$ /etc/hosts
	I0920 18:13:15.008311  749135 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.237	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
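	(After this rewrite and the host.minikube.internal entry added at 18:13:11, /etc/hosts on the guest carries the cluster aliases alongside the stock entries, roughly as follows; reconstructed from the commands, not dumped from the VM:)
	  192.168.39.1      host.minikube.internal
	  192.168.39.237    control-plane.minikube.internal
	  127.0.1.1         addons-446299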
	I0920 18:13:15.020386  749135 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:13:15.143207  749135 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:13:15.160928  749135 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299 for IP: 192.168.39.237
	I0920 18:13:15.160952  749135 certs.go:194] generating shared ca certs ...
	I0920 18:13:15.160971  749135 certs.go:226] acquiring lock for ca certs: {Name:mkf559981e1ff96dd3b092845a7637f34a653668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.161127  749135 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key
	I0920 18:13:15.288325  749135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt ...
	I0920 18:13:15.288359  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt: {Name:mkd07e710befe398f359697123be87266dbb73cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.288526  749135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key ...
	I0920 18:13:15.288537  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key: {Name:mk8452559729a4e6fe54cdcaa3db5cb2d03b365d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.288610  749135 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key
	I0920 18:13:15.460720  749135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt ...
	I0920 18:13:15.460749  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt: {Name:mkd5912367400d11fe28d50162d9491c1c026ad6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.460926  749135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key ...
	I0920 18:13:15.460946  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key: {Name:mk7b4a10567303413b299060d87451a86c82a4b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.461047  749135 certs.go:256] generating profile certs ...
	I0920 18:13:15.461131  749135 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.key
	I0920 18:13:15.461148  749135 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt with IP's: []
	I0920 18:13:15.666412  749135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt ...
	I0920 18:13:15.666455  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: {Name:mkef01489d7dcf2bfb46ac5af11bed50283fb691 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.666668  749135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.key ...
	I0920 18:13:15.666687  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.key: {Name:mkce7236a454e2c0202c83ef853c169198fb2f81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.666791  749135 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.key.77016387
	I0920 18:13:15.666816  749135 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.crt.77016387 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.237]
	I0920 18:13:15.705625  749135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.crt.77016387 ...
	I0920 18:13:15.705654  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.crt.77016387: {Name:mk64bf6bb73ff35990c8781efc3d30626dc3ca21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.705826  749135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.key.77016387 ...
	I0920 18:13:15.705843  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.key.77016387: {Name:mk18ead88f15a69013b31853d623fd0cb8c39466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.705941  749135 certs.go:381] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.crt.77016387 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.crt
	I0920 18:13:15.706040  749135 certs.go:385] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.key.77016387 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.key
	I0920 18:13:15.706114  749135 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.key
	I0920 18:13:15.706140  749135 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.crt with IP's: []
	I0920 18:13:15.788260  749135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.crt ...
	I0920 18:13:15.788293  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.crt: {Name:mk5ff8fc31363db98a0f0ca7278de49be24b8420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.788475  749135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.key ...
	I0920 18:13:15.788494  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.key: {Name:mk7a90a72aaffce450a2196a523cb38d8ddfd4f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.788714  749135 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:13:15.788762  749135 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:13:15.788796  749135 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:13:15.788835  749135 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem (1679 bytes)
	I0920 18:13:15.789513  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:13:15.814280  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:13:15.838979  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:13:15.861251  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:13:15.883772  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 18:13:15.906899  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:13:15.930055  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:13:15.952960  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:13:15.976078  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:13:15.998990  749135 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:13:16.015378  749135 ssh_runner.go:195] Run: openssl version
	I0920 18:13:16.021288  749135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:13:16.031743  749135 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:13:16.036218  749135 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:13:16.036292  749135 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:13:16.041983  749135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
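	(The b5213941.0 name is the OpenSSL subject-name hash printed by the `openssl x509 -hash -noout` call just above, which is the naming convention for CA symlinks in /etc/ssl/certs. Done by hand it would be:)
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 for minikubeCA.pem
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"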
	I0920 18:13:16.052410  749135 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:13:16.056509  749135 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 18:13:16.056561  749135 kubeadm.go:392] StartCluster: {Name:addons-446299 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:addons-446299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:13:16.056643  749135 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:13:16.056724  749135 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:13:16.093233  749135 cri.go:89] found id: ""
	I0920 18:13:16.093305  749135 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:13:16.103183  749135 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:13:16.112220  749135 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:13:16.121055  749135 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:13:16.121076  749135 kubeadm.go:157] found existing configuration files:
	
	I0920 18:13:16.121125  749135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:13:16.129727  749135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:13:16.129793  749135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:13:16.138769  749135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:13:16.147343  749135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:13:16.147401  749135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:13:16.156084  749135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:13:16.164356  749135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:13:16.164409  749135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:13:16.172957  749135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:13:16.181269  749135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:13:16.181319  749135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:13:16.189971  749135 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:13:16.241816  749135 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 18:13:16.242023  749135 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:13:16.343705  749135 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:13:16.343865  749135 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:13:16.344016  749135 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 18:13:16.353422  749135 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:13:16.356505  749135 out.go:235]   - Generating certificates and keys ...
	I0920 18:13:16.356621  749135 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:13:16.356707  749135 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:13:16.567905  749135 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 18:13:16.678138  749135 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 18:13:16.903150  749135 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 18:13:17.220781  749135 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 18:13:17.330970  749135 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 18:13:17.331262  749135 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-446299 localhost] and IPs [192.168.39.237 127.0.0.1 ::1]
	I0920 18:13:17.404562  749135 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 18:13:17.404723  749135 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-446299 localhost] and IPs [192.168.39.237 127.0.0.1 ::1]
	I0920 18:13:17.558748  749135 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 18:13:17.723982  749135 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 18:13:17.850510  749135 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 18:13:17.850712  749135 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:13:17.910185  749135 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:13:18.072173  749135 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 18:13:18.135494  749135 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:13:18.547143  749135 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:13:18.760484  749135 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:13:18.761203  749135 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:13:18.765007  749135 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:13:18.801126  749135 out.go:235]   - Booting up control plane ...
	I0920 18:13:18.801251  749135 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:13:18.801344  749135 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:13:18.801424  749135 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:13:18.801571  749135 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:13:18.801721  749135 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:13:18.801785  749135 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:13:18.927609  749135 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 18:13:18.927774  749135 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 18:13:19.928576  749135 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001817815s
	I0920 18:13:19.928734  749135 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 18:13:24.427415  749135 kubeadm.go:310] [api-check] The API server is healthy after 4.501490258s
	I0920 18:13:24.439460  749135 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 18:13:24.456660  749135 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 18:13:24.489726  749135 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 18:13:24.489974  749135 kubeadm.go:310] [mark-control-plane] Marking the node addons-446299 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 18:13:24.502419  749135 kubeadm.go:310] [bootstrap-token] Using token: 2qbco4.c4cth5cwyyzw51bf
	I0920 18:13:24.503870  749135 out.go:235]   - Configuring RBAC rules ...
	I0920 18:13:24.504029  749135 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 18:13:24.514334  749135 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 18:13:24.520831  749135 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 18:13:24.524418  749135 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 18:13:24.527658  749135 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 18:13:24.533751  749135 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 18:13:24.833210  749135 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 18:13:25.263206  749135 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 18:13:25.833304  749135 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 18:13:25.834184  749135 kubeadm.go:310] 
	I0920 18:13:25.834298  749135 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 18:13:25.834327  749135 kubeadm.go:310] 
	I0920 18:13:25.834438  749135 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 18:13:25.834450  749135 kubeadm.go:310] 
	I0920 18:13:25.834490  749135 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 18:13:25.834595  749135 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 18:13:25.834657  749135 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 18:13:25.834674  749135 kubeadm.go:310] 
	I0920 18:13:25.834745  749135 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 18:13:25.834754  749135 kubeadm.go:310] 
	I0920 18:13:25.834980  749135 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 18:13:25.834997  749135 kubeadm.go:310] 
	I0920 18:13:25.835059  749135 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 18:13:25.835163  749135 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 18:13:25.835253  749135 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 18:13:25.835263  749135 kubeadm.go:310] 
	I0920 18:13:25.835376  749135 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 18:13:25.835483  749135 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 18:13:25.835490  749135 kubeadm.go:310] 
	I0920 18:13:25.835595  749135 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2qbco4.c4cth5cwyyzw51bf \
	I0920 18:13:25.835757  749135 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:947ef21afc8104efa9fe7e5dbe397ab7540e2665a521761d784eb9c9d11b061d \
	I0920 18:13:25.835806  749135 kubeadm.go:310] 	--control-plane 
	I0920 18:13:25.835816  749135 kubeadm.go:310] 
	I0920 18:13:25.835914  749135 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 18:13:25.835926  749135 kubeadm.go:310] 
	I0920 18:13:25.836021  749135 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2qbco4.c4cth5cwyyzw51bf \
	I0920 18:13:25.836149  749135 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:947ef21afc8104efa9fe7e5dbe397ab7540e2665a521761d784eb9c9d11b061d 
	I0920 18:13:25.837593  749135 kubeadm.go:310] W0920 18:13:16.222475     810 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:13:25.837868  749135 kubeadm.go:310] W0920 18:13:16.223486     810 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:13:25.837990  749135 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:13:25.838019  749135 cni.go:84] Creating CNI manager for ""
	I0920 18:13:25.838028  749135 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:13:25.839751  749135 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 18:13:25.840949  749135 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 18:13:25.852783  749135 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 18:13:25.871921  749135 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:13:25.871998  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:25.872010  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-446299 minikube.k8s.io/updated_at=2024_09_20T18_13_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a minikube.k8s.io/name=addons-446299 minikube.k8s.io/primary=true
	I0920 18:13:25.893378  749135 ops.go:34] apiserver oom_adj: -16
	I0920 18:13:26.025723  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:26.526635  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:27.026038  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:27.526100  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:28.026195  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:28.526494  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:29.026560  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:29.526369  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:30.026015  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:30.116670  749135 kubeadm.go:1113] duration metric: took 4.244739753s to wait for elevateKubeSystemPrivileges
	I0920 18:13:30.116706  749135 kubeadm.go:394] duration metric: took 14.06015239s to StartCluster
	I0920 18:13:30.116726  749135 settings.go:142] acquiring lock: {Name:mk0bd1e421bf437575c076c52c1ff2f74497a1ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:30.116861  749135 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:13:30.117227  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/kubeconfig: {Name:mk275c54cf52b0ccdc22fcaa39c7b9c31092c648 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:30.117422  749135 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 18:13:30.117448  749135 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:13:30.117512  749135 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 18:13:30.117640  749135 addons.go:69] Setting yakd=true in profile "addons-446299"
	I0920 18:13:30.117667  749135 addons.go:234] Setting addon yakd=true in "addons-446299"
	I0920 18:13:30.117700  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.117727  749135 config.go:182] Loaded profile config "addons-446299": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:13:30.117688  749135 addons.go:69] Setting default-storageclass=true in profile "addons-446299"
	I0920 18:13:30.117804  749135 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-446299"
	I0920 18:13:30.117694  749135 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-446299"
	I0920 18:13:30.117828  749135 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-446299"
	I0920 18:13:30.117867  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.117708  749135 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-446299"
	I0920 18:13:30.117998  749135 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-446299"
	I0920 18:13:30.117714  749135 addons.go:69] Setting inspektor-gadget=true in profile "addons-446299"
	I0920 18:13:30.118028  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.118044  749135 addons.go:234] Setting addon inspektor-gadget=true in "addons-446299"
	I0920 18:13:30.118082  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.117716  749135 addons.go:69] Setting gcp-auth=true in profile "addons-446299"
	I0920 18:13:30.118200  749135 mustload.go:65] Loading cluster: addons-446299
	I0920 18:13:30.118199  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.118219  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.117703  749135 addons.go:69] Setting ingress-dns=true in profile "addons-446299"
	I0920 18:13:30.118237  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.118242  749135 addons.go:234] Setting addon ingress-dns=true in "addons-446299"
	I0920 18:13:30.118250  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.118270  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.118376  749135 config.go:182] Loaded profile config "addons-446299": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:13:30.118380  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.118401  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.118492  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.118530  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.118647  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.118678  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.117720  749135 addons.go:69] Setting metrics-server=true in profile "addons-446299"
	I0920 18:13:30.118748  749135 addons.go:234] Setting addon metrics-server=true in "addons-446299"
	I0920 18:13:30.118777  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.118823  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.118831  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.118883  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.118889  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.117726  749135 addons.go:69] Setting ingress=true in profile "addons-446299"
	I0920 18:13:30.119096  749135 addons.go:234] Setting addon ingress=true in "addons-446299"
	I0920 18:13:30.119137  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.117736  749135 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-446299"
	I0920 18:13:30.119353  749135 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-446299"
	I0920 18:13:30.119501  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.119521  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.119740  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.117735  749135 addons.go:69] Setting registry=true in profile "addons-446299"
	I0920 18:13:30.119761  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.119766  749135 addons.go:234] Setting addon registry=true in "addons-446299"
	I0920 18:13:30.119795  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.120169  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.120211  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.117735  749135 addons.go:69] Setting cloud-spanner=true in profile "addons-446299"
	I0920 18:13:30.120247  749135 addons.go:234] Setting addon cloud-spanner=true in "addons-446299"
	I0920 18:13:30.117743  749135 addons.go:69] Setting volcano=true in profile "addons-446299"
	I0920 18:13:30.120264  749135 addons.go:234] Setting addon volcano=true in "addons-446299"
	I0920 18:13:30.120292  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.120352  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.117744  749135 addons.go:69] Setting storage-provisioner=true in profile "addons-446299"
	I0920 18:13:30.120495  749135 addons.go:234] Setting addon storage-provisioner=true in "addons-446299"
	I0920 18:13:30.120536  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.120768  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.120790  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.117753  749135 addons.go:69] Setting volumesnapshots=true in profile "addons-446299"
	I0920 18:13:30.120925  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.120933  749135 addons.go:234] Setting addon volumesnapshots=true in "addons-446299"
	I0920 18:13:30.120955  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.120966  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.122929  749135 out.go:177] * Verifying Kubernetes components...
	I0920 18:13:30.124310  749135 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:13:30.139606  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35413
	I0920 18:13:30.139626  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43313
	I0920 18:13:30.139664  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38439
	I0920 18:13:30.139664  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35171
	I0920 18:13:30.151212  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37399
	I0920 18:13:30.151245  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.151251  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34369
	I0920 18:13:30.151274  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.151393  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.151405  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.151438  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.151856  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.151891  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.152064  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.152188  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.152245  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.152411  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.152423  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.152487  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.152534  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.152664  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.152678  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.152736  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.152850  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.152861  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.152984  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.152995  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.153048  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.153483  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.153515  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.154013  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46593
	I0920 18:13:30.154291  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.154314  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.154382  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.154805  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.154867  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.155632  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.155794  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.155815  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.155882  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.156284  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.156326  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.159168  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.159296  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.159618  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.159652  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.159773  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.159808  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.160117  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.160143  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.160217  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.160647  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.161813  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.161856  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.164600  749135 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-446299"
	I0920 18:13:30.164649  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.165039  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.165072  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.176807  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33581
	I0920 18:13:30.177469  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.178091  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.178111  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.178583  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.179242  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.179271  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.185984  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43023
	I0920 18:13:30.186586  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.187123  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.187144  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.187554  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.188160  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.188203  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.193206  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39461
	I0920 18:13:30.193417  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33557
	I0920 18:13:30.193849  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.194099  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.194452  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.194471  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.194968  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.195118  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.195132  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.195349  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38343
	I0920 18:13:30.195438  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.196077  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.196556  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.196580  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.197033  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.197694  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.197734  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.197960  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36057
	I0920 18:13:30.198500  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.198621  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.198726  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37865
	I0920 18:13:30.198876  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.199030  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.199369  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.199385  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.199416  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.199438  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.199710  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.200318  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.200362  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.200438  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.201288  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.201893  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.201916  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.203229  749135 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 18:13:30.204746  749135 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 18:13:30.204766  749135 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 18:13:30.204788  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.206295  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.206675  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.207700  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43965
	I0920 18:13:30.208147  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.208668  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.208691  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.209400  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.209672  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.209714  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.210328  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.210357  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.210920  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.210948  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.211140  749135 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 18:13:30.211638  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.212145  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.212323  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.212494  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.212630  749135 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 18:13:30.212646  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 18:13:30.212664  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.213593  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39695
	I0920 18:13:30.214660  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34213
	I0920 18:13:30.215405  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.215903  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.215924  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.216384  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.216437  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.216507  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.216537  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.216592  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37735
	I0920 18:13:30.217041  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.217047  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.217305  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.217448  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.217585  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.218334  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.218356  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.218795  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.219018  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.219181  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38177
	I0920 18:13:30.219880  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.219925  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.219979  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.220067  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.220460  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.220482  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.220702  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.220722  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.220787  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38889
	I0920 18:13:30.221095  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.221183  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.221329  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.221386  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:30.221397  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:30.223334  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.223352  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.223398  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:30.223412  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:30.223419  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:30.223427  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:30.223433  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:30.223529  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.224012  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:30.224041  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:30.224048  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	W0920 18:13:30.224154  749135 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0920 18:13:30.224543  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41669
	I0920 18:13:30.225486  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.225509  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.226183  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.226202  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.226560  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 18:13:30.226986  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.227285  749135 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 18:13:30.227644  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.227684  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.228253  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34967
	I0920 18:13:30.228649  749135 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 18:13:30.228675  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 18:13:30.228697  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.229313  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42909
	I0920 18:13:30.229673  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.230049  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 18:13:30.230142  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.230158  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.230485  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.230672  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.231280  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.231806  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46577
	I0920 18:13:30.231963  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.231988  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.232145  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.232332  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.232428  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.232440  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 18:13:30.232482  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.232696  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.233542  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.233796  749135 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:13:30.234419  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.234438  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.234783  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 18:13:30.235010  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.235348  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.236127  749135 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 18:13:30.236900  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38295
	I0920 18:13:30.237440  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 18:13:30.237599  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39621
	I0920 18:13:30.238719  749135 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:13:30.239949  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 18:13:30.240129  749135 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 18:13:30.240146  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 18:13:30.240162  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.242347  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 18:13:30.243261  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.243644  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.243673  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.243908  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.244083  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.244194  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.244349  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.244407  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44717
	I0920 18:13:30.244610  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 18:13:30.245914  749135 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 18:13:30.245941  749135 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 18:13:30.245963  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.246673  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41943
	I0920 18:13:30.247429  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.247556  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.247990  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.248061  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.248074  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.248079  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.248343  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.248449  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.248449  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.248468  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.248596  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.248607  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.248648  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.248833  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.249170  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.249280  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.249352  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.249393  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.249409  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.250084  749135 addons.go:234] Setting addon default-storageclass=true in "addons-446299"
	I0920 18:13:30.250124  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.250508  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.250532  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.251170  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.251192  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.251274  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.251488  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.251857  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.251862  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.251910  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.251940  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.252078  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.252212  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.252224  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.252440  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.252553  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.252748  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.252820  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.252833  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.253735  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.253941  749135 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 18:13:30.254017  749135 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 18:13:30.253980  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.254455  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.254656  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.254870  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.254873  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.255177  749135 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:13:30.255187  749135 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 18:13:30.255205  749135 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 18:13:30.255226  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.255274  749135 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 18:13:30.255278  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.255288  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 18:13:30.255303  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.256466  749135 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 18:13:30.256532  749135 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:13:30.256552  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:13:30.256570  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.258154  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 18:13:30.259159  749135 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 18:13:30.259174  749135 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 18:13:30.259188  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.259235  749135 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 18:13:30.260368  749135 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 18:13:30.260382  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 18:13:30.260394  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.260519  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.260844  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.260873  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.261038  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.261196  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.262948  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.263013  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.263033  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.263050  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.263161  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.263545  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.263701  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.264179  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.264417  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.264628  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.265340  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.265500  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.265732  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.265751  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.266060  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.266249  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.266266  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.266441  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.266593  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.266625  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.266670  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.266742  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.267063  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.267118  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.267232  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.267247  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.267357  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.267382  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.267549  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.267839  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.269511  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36981
	I0920 18:13:30.269878  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.270901  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.270926  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.271296  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.271468  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.273221  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.274917  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43681
	I0920 18:13:30.275136  749135 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 18:13:30.275446  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.276076  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.276096  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.276414  749135 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 18:13:30.276440  749135 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 18:13:30.276461  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.276501  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.276736  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.278674  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.280057  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.280316  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.280342  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.280375  749135 out.go:177]   - Using image docker.io/busybox:stable
	I0920 18:13:30.280530  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.280706  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.280828  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.280961  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	W0920 18:13:30.281845  749135 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:35600->192.168.39.237:22: read: connection reset by peer
	I0920 18:13:30.281937  749135 retry.go:31] will retry after 148.234221ms: ssh: handshake failed: read tcp 192.168.39.1:35600->192.168.39.237:22: read: connection reset by peer
	I0920 18:13:30.282766  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37633
	I0920 18:13:30.282794  749135 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 18:13:30.283193  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.283743  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.283764  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.284120  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.284286  749135 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 18:13:30.284302  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 18:13:30.284319  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.284696  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.284848  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.290962  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.290998  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.291015  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.291035  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.291443  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.291607  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.291761  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.301013  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39971
	I0920 18:13:30.301540  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.302060  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.302090  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.302449  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.302621  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.303997  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.304220  749135 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:13:30.304236  749135 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:13:30.304256  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.307237  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.307715  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.307749  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.307899  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.308079  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.308237  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.308392  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.604495  749135 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:13:30.604525  749135 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 18:13:30.661112  749135 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 18:13:30.661146  749135 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 18:13:30.662437  749135 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 18:13:30.662469  749135 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 18:13:30.705589  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 18:13:30.750149  749135 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 18:13:30.750187  749135 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 18:13:30.753172  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 18:13:30.755196  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 18:13:30.771513  749135 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 18:13:30.771540  749135 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 18:13:30.797810  749135 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 18:13:30.797835  749135 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 18:13:30.807101  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 18:13:30.868448  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:13:30.869944  749135 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 18:13:30.869963  749135 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 18:13:30.871146  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:13:30.896462  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 18:13:30.900930  749135 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 18:13:30.900959  749135 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 18:13:30.906831  749135 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 18:13:30.906880  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 18:13:30.933744  749135 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 18:13:30.933774  749135 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 18:13:30.969038  749135 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 18:13:30.969076  749135 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 18:13:31.000321  749135 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 18:13:31.000354  749135 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 18:13:31.182228  749135 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 18:13:31.182256  749135 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 18:13:31.198470  749135 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 18:13:31.198506  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 18:13:31.232002  749135 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 18:13:31.232027  749135 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 18:13:31.241138  749135 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 18:13:31.241162  749135 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 18:13:31.303359  749135 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 18:13:31.303389  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 18:13:31.308659  749135 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 18:13:31.308686  749135 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 18:13:31.411918  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 18:13:31.444332  749135 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 18:13:31.444368  749135 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 18:13:31.517643  749135 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:13:31.517669  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 18:13:31.522528  749135 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 18:13:31.522555  749135 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 18:13:31.527932  749135 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:13:31.527961  749135 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 18:13:31.598680  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 18:13:31.753266  749135 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 18:13:31.753305  749135 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 18:13:31.825090  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:13:31.868789  749135 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 18:13:31.868821  749135 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 18:13:31.871872  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:13:32.035165  749135 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 18:13:32.035205  749135 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 18:13:32.325034  749135 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 18:13:32.325068  749135 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 18:13:32.426301  749135 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 18:13:32.426330  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 18:13:32.734227  749135 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 18:13:32.734252  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 18:13:32.776162  749135 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 18:13:32.776201  749135 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 18:13:32.973816  749135 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.369238207s)
	I0920 18:13:32.973844  749135 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.369303036s)
	I0920 18:13:32.973868  749135 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0920 18:13:32.974717  749135 node_ready.go:35] waiting up to 6m0s for node "addons-446299" to be "Ready" ...
	I0920 18:13:32.978640  749135 node_ready.go:49] node "addons-446299" has status "Ready":"True"
	I0920 18:13:32.978660  749135 node_ready.go:38] duration metric: took 3.921107ms for node "addons-446299" to be "Ready" ...
	I0920 18:13:32.978672  749135 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:13:32.990987  749135 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8b5fx" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:33.092955  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 18:13:33.125330  749135 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 18:13:33.125357  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 18:13:33.271505  749135 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 18:13:33.271534  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 18:13:33.497723  749135 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-446299" context rescaled to 1 replicas
	I0920 18:13:33.600812  749135 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 18:13:33.600847  749135 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 18:13:33.656016  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.902807697s)
	I0920 18:13:33.656075  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:33.656075  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.900839477s)
	I0920 18:13:33.656016  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.950386811s)
	I0920 18:13:33.656109  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:33.656121  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:33.656127  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:33.656090  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:33.656146  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:33.656567  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:33.656587  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:33.656608  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:33.656624  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:33.656627  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:33.656653  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:33.656665  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:33.656676  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:33.656635  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:33.656718  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:33.656637  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:33.656744  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:33.656760  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:33.656767  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:33.656730  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:33.657076  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:33.657118  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:33.657119  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:33.657096  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:33.657156  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:33.657263  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:33.657279  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:33.758218  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 18:13:35.015799  749135 pod_ready.go:103] pod "coredns-7c65d6cfc9-8b5fx" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:35.494820  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.687683083s)
	I0920 18:13:35.494889  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.494891  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.626405857s)
	I0920 18:13:35.494920  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.494932  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.494930  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.623755287s)
	I0920 18:13:35.494950  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.494983  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.495052  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.495370  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.495388  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.495396  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.495404  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.496899  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:35.496907  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:35.496907  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:35.496946  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.496958  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.496966  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.496977  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.496990  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.496999  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.497065  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.497077  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.497089  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.497098  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.497258  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.497276  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.498278  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:35.498290  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.498301  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.545445  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.545475  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.545718  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.545745  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.545752  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	W0920 18:13:35.545859  749135 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0920 18:13:35.559802  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.559831  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.560074  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.560092  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.560108  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:36.023603  749135 pod_ready.go:93] pod "coredns-7c65d6cfc9-8b5fx" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:36.023630  749135 pod_ready.go:82] duration metric: took 3.032619357s for pod "coredns-7c65d6cfc9-8b5fx" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.023643  749135 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tfngl" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.059659  749135 pod_ready.go:93] pod "coredns-7c65d6cfc9-tfngl" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:36.059693  749135 pod_ready.go:82] duration metric: took 36.040161ms for pod "coredns-7c65d6cfc9-tfngl" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.059705  749135 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.075393  749135 pod_ready.go:93] pod "etcd-addons-446299" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:36.075428  749135 pod_ready.go:82] duration metric: took 15.714418ms for pod "etcd-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.075441  749135 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.089509  749135 pod_ready.go:93] pod "kube-apiserver-addons-446299" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:36.089536  749135 pod_ready.go:82] duration metric: took 14.086774ms for pod "kube-apiserver-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.089546  749135 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.600534  749135 pod_ready.go:93] pod "kube-controller-manager-addons-446299" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:36.600565  749135 pod_ready.go:82] duration metric: took 511.011851ms for pod "kube-controller-manager-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.600579  749135 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9pcgb" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.797080  749135 pod_ready.go:93] pod "kube-proxy-9pcgb" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:36.797111  749135 pod_ready.go:82] duration metric: took 196.523175ms for pod "kube-proxy-9pcgb" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.797123  749135 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:37.195153  749135 pod_ready.go:93] pod "kube-scheduler-addons-446299" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:37.195185  749135 pod_ready.go:82] duration metric: took 398.053895ms for pod "kube-scheduler-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:37.195198  749135 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:37.260708  749135 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 18:13:37.260749  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:37.264035  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:37.264543  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:37.264579  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:37.264739  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:37.264958  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:37.265141  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:37.265285  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:37.472764  749135 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 18:13:37.656998  749135 addons.go:234] Setting addon gcp-auth=true in "addons-446299"
	I0920 18:13:37.657072  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:37.657494  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:37.657545  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:37.673709  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40331
	I0920 18:13:37.674398  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:37.674958  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:37.674981  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:37.675363  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:37.675843  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:37.675888  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:37.691444  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38543
	I0920 18:13:37.692042  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:37.692560  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:37.692593  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:37.693006  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:37.693249  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:37.695166  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:37.695451  749135 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 18:13:37.695481  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:37.698450  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:37.698921  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:37.698953  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:37.699128  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:37.699312  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:37.699441  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:37.699604  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:38.819493  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.922986564s)
	I0920 18:13:38.819541  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.407583803s)
	I0920 18:13:38.819575  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.819591  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.819607  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.819648  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.220925429s)
	I0920 18:13:38.819598  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.819686  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.819705  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.819778  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.994650356s)
	W0920 18:13:38.819815  749135 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 18:13:38.819840  749135 retry.go:31] will retry after 365.705658ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 18:13:38.819845  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.947942371s)
	I0920 18:13:38.819873  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.819885  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.819961  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.726965652s)
	I0920 18:13:38.820001  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.820012  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.820227  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.820244  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.820285  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.820295  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.820413  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:38.820433  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:38.820460  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.820467  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.820475  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.820481  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.820629  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.820639  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.820647  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.820655  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.820718  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:38.820773  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.820781  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.820789  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.820795  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.821299  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:38.821316  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:38.821349  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.821355  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.821365  749135 addons.go:475] Verifying addon registry=true in "addons-446299"
	I0920 18:13:38.821906  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.821917  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.821926  749135 addons.go:475] Verifying addon ingress=true in "addons-446299"
	I0920 18:13:38.821997  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.822026  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.822038  749135 addons.go:475] Verifying addon metrics-server=true in "addons-446299"
	I0920 18:13:38.822070  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.822084  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.822092  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.822100  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.822128  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.822143  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.822495  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:38.822542  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.822551  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.823406  749135 out.go:177] * Verifying ingress addon...
	I0920 18:13:38.823868  749135 out.go:177] * Verifying registry addon...
	I0920 18:13:38.824871  749135 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-446299 service yakd-dashboard -n yakd-dashboard
	
	I0920 18:13:38.825597  749135 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 18:13:38.826680  749135 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 18:13:38.844205  749135 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 18:13:38.844236  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:38.850356  749135 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 18:13:38.850383  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:39.186375  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:13:39.200878  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:39.330411  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:39.330769  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:39.849376  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:39.851690  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:40.361850  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:40.362230  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:41.034778  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:41.035000  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:41.038162  749135 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.342687523s)
	I0920 18:13:41.038403  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.280132041s)
	I0920 18:13:41.038461  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:41.038481  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:41.038819  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:41.038884  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:41.038905  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:41.038922  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:41.039163  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:41.039205  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:41.039225  749135 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-446299"
	I0920 18:13:41.039205  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:41.041287  749135 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 18:13:41.041290  749135 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:13:41.043438  749135 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 18:13:41.044297  749135 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 18:13:41.044713  749135 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 18:13:41.044732  749135 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 18:13:41.101841  749135 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 18:13:41.101863  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:41.130328  749135 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 18:13:41.130361  749135 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 18:13:41.246926  749135 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 18:13:41.246950  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 18:13:41.330722  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:41.331217  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:41.367190  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 18:13:41.375612  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.189187999s)
	I0920 18:13:41.375679  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:41.375703  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:41.376082  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:41.376123  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:41.376131  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:41.376140  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:41.376180  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:41.376437  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:41.376461  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:41.376464  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:41.548363  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:41.701651  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:41.831758  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:41.831933  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:42.053967  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:42.331450  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:42.331860  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:42.559368  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:42.796101  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.428861154s)
	I0920 18:13:42.796164  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:42.796186  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:42.796539  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:42.796652  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:42.796628  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:42.796665  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:42.796674  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:42.796931  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:42.796948  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:42.796971  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:42.798018  749135 addons.go:475] Verifying addon gcp-auth=true in "addons-446299"
	I0920 18:13:42.799750  749135 out.go:177] * Verifying gcp-auth addon...
	I0920 18:13:42.801961  749135 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 18:13:42.813536  749135 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 18:13:42.813557  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:42.834100  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:42.834512  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:43.050004  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:43.305311  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:43.330407  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:43.331586  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:43.549945  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:43.702111  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:43.806287  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:43.830332  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:43.830560  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:44.050313  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:44.307181  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:44.332062  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:44.332579  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:44.549621  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:44.806074  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:44.830087  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:44.830821  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:45.049798  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:45.305355  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:45.329798  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:45.330472  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:45.549159  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:45.702368  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:45.805600  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:45.830331  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:45.831003  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:46.048681  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:46.476235  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:46.476881  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:46.477765  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:46.576766  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:46.805777  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:46.830583  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:46.831463  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:47.050496  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:47.307091  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:47.330512  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:47.331048  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:47.549305  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:47.805735  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:47.830215  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:47.831512  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:48.049902  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:48.202178  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:48.306243  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:48.329718  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:48.332280  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:48.550170  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:48.805429  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:48.829830  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:48.831490  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:49.050407  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:49.305950  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:49.331188  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:49.331284  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:49.549193  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:49.805377  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:49.831064  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:49.831335  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:50.050205  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:50.205469  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:50.306610  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:50.330226  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:50.331728  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:50.548853  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:50.806045  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:50.830924  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:50.831062  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:51.049036  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:51.305994  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:51.330295  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:51.330905  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:51.549433  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:51.805870  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:51.830479  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:51.831665  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:52.050500  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:52.305644  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:52.330460  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:52.330909  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:52.549056  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:52.700600  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:52.805458  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:52.829967  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:52.831274  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:53.049224  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:53.306145  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:53.330699  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:53.331032  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:53.548388  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:54.211235  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:54.211371  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:54.211581  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:54.212019  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:54.305931  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:54.332757  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:54.333316  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:54.550241  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:54.701439  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:54.805276  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:54.830616  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:54.831417  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:55.057083  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:55.305836  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:55.330687  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:55.331243  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:55.550673  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:55.701690  749135 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:55.701725  749135 pod_ready.go:82] duration metric: took 18.50651845s for pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:55.701734  749135 pod_ready.go:39] duration metric: took 22.723049339s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:13:55.701754  749135 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:13:55.701817  749135 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:13:55.736899  749135 api_server.go:72] duration metric: took 25.619420852s to wait for apiserver process to appear ...
	I0920 18:13:55.736929  749135 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:13:55.736952  749135 api_server.go:253] Checking apiserver healthz at https://192.168.39.237:8443/healthz ...
	I0920 18:13:55.741901  749135 api_server.go:279] https://192.168.39.237:8443/healthz returned 200:
	ok
	I0920 18:13:55.743609  749135 api_server.go:141] control plane version: v1.31.1
	I0920 18:13:55.743635  749135 api_server.go:131] duration metric: took 6.69997ms to wait for apiserver health ...
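
The healthz probe recorded just above can be reproduced outside minikube with a plain HTTPS GET. The sketch below is only an illustration of that check, not minikube's own client: the endpoint is copied from the log line, and skipping TLS verification is an assumption made for brevity (a real client would trust the cluster CA instead).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log line above; illustrative only.
	url := "https://192.168.39.237:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for the sketch: skip certificate verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("healthz not reachable yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok", matching the log above.
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, string(body))
}
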
	I0920 18:13:55.743646  749135 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:13:55.757231  749135 system_pods.go:59] 17 kube-system pods found
	I0920 18:13:55.757585  749135 system_pods.go:61] "coredns-7c65d6cfc9-8b5fx" [226fc466-f0b5-4501-8879-b8b9b8d758ac] Running
	I0920 18:13:55.757615  749135 system_pods.go:61] "csi-hostpath-attacher-0" [b131974d-0f4b-4bc6-bec3-d4c797279aa4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 18:13:55.757633  749135 system_pods.go:61] "csi-hostpath-resizer-0" [684355d7-d68e-4357-8103-d8350a38ea37] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 18:13:55.757647  749135 system_pods.go:61] "csi-hostpathplugin-fcmx5" [1576357c-2e2c-469a-b069-dcac225f49c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 18:13:55.757654  749135 system_pods.go:61] "etcd-addons-446299" [c82607ca-b677-4592-935a-a32dad76e79c] Running
	I0920 18:13:55.757662  749135 system_pods.go:61] "kube-apiserver-addons-446299" [93375989-de9f-4fea-afcc-44d35775ddd6] Running
	I0920 18:13:55.757668  749135 system_pods.go:61] "kube-controller-manager-addons-446299" [4c06855c-f18c-4df4-bd04-584c8594a744] Running
	I0920 18:13:55.757677  749135 system_pods.go:61] "kube-ingress-dns-minikube" [631849c1-f984-4e83-b07b-6b2ed4eb0697] Running
	I0920 18:13:55.757682  749135 system_pods.go:61] "kube-proxy-9pcgb" [934faade-c115-4ced-9bb6-c22a2fe014f2] Running
	I0920 18:13:55.757689  749135 system_pods.go:61] "kube-scheduler-addons-446299" [ce4ce9a3-dd64-47ed-a920-b6c5359c80a7] Running
	I0920 18:13:55.757697  749135 system_pods.go:61] "metrics-server-84c5f94fbc-dgfgh" [84513540-b090-4d24-b6e0-9ed764434018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:13:55.757705  749135 system_pods.go:61] "nvidia-device-plugin-daemonset-6l2l2" [c6db8268-e330-413b-9107-88c63f861e42] Running
	I0920 18:13:55.757714  749135 system_pods.go:61] "registry-66c9cd494c-vxc6t" [10b4cecb-c85b-45ef-8043-e88a81971d51] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 18:13:55.757725  749135 system_pods.go:61] "registry-proxy-bqdmf" [11ab987d-a80f-412a-8a15-03a5898a2e9e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 18:13:55.757738  749135 system_pods.go:61] "snapshot-controller-56fcc65765-4qwlb" [d4cd83fc-a074-4317-9b02-22010ae0ca66] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 18:13:55.757750  749135 system_pods.go:61] "snapshot-controller-56fcc65765-8rk95" [63d1f200-a587-488c-82d3-bf38586a6fd0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 18:13:55.757759  749135 system_pods.go:61] "storage-provisioner" [0e9e378d-208e-46e0-a2be-70f96e59408a] Running
	I0920 18:13:55.757770  749135 system_pods.go:74] duration metric: took 14.117036ms to wait for pod list to return data ...
	I0920 18:13:55.757782  749135 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:13:55.762579  749135 default_sa.go:45] found service account: "default"
	I0920 18:13:55.762610  749135 default_sa.go:55] duration metric: took 4.817698ms for default service account to be created ...
	I0920 18:13:55.762622  749135 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:13:55.772780  749135 system_pods.go:86] 17 kube-system pods found
	I0920 18:13:55.772808  749135 system_pods.go:89] "coredns-7c65d6cfc9-8b5fx" [226fc466-f0b5-4501-8879-b8b9b8d758ac] Running
	I0920 18:13:55.772816  749135 system_pods.go:89] "csi-hostpath-attacher-0" [b131974d-0f4b-4bc6-bec3-d4c797279aa4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 18:13:55.772822  749135 system_pods.go:89] "csi-hostpath-resizer-0" [684355d7-d68e-4357-8103-d8350a38ea37] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 18:13:55.772830  749135 system_pods.go:89] "csi-hostpathplugin-fcmx5" [1576357c-2e2c-469a-b069-dcac225f49c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 18:13:55.772834  749135 system_pods.go:89] "etcd-addons-446299" [c82607ca-b677-4592-935a-a32dad76e79c] Running
	I0920 18:13:55.772839  749135 system_pods.go:89] "kube-apiserver-addons-446299" [93375989-de9f-4fea-afcc-44d35775ddd6] Running
	I0920 18:13:55.772842  749135 system_pods.go:89] "kube-controller-manager-addons-446299" [4c06855c-f18c-4df4-bd04-584c8594a744] Running
	I0920 18:13:55.772847  749135 system_pods.go:89] "kube-ingress-dns-minikube" [631849c1-f984-4e83-b07b-6b2ed4eb0697] Running
	I0920 18:13:55.772851  749135 system_pods.go:89] "kube-proxy-9pcgb" [934faade-c115-4ced-9bb6-c22a2fe014f2] Running
	I0920 18:13:55.772856  749135 system_pods.go:89] "kube-scheduler-addons-446299" [ce4ce9a3-dd64-47ed-a920-b6c5359c80a7] Running
	I0920 18:13:55.772865  749135 system_pods.go:89] "metrics-server-84c5f94fbc-dgfgh" [84513540-b090-4d24-b6e0-9ed764434018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:13:55.772922  749135 system_pods.go:89] "nvidia-device-plugin-daemonset-6l2l2" [c6db8268-e330-413b-9107-88c63f861e42] Running
	I0920 18:13:55.772931  749135 system_pods.go:89] "registry-66c9cd494c-vxc6t" [10b4cecb-c85b-45ef-8043-e88a81971d51] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 18:13:55.772936  749135 system_pods.go:89] "registry-proxy-bqdmf" [11ab987d-a80f-412a-8a15-03a5898a2e9e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 18:13:55.772946  749135 system_pods.go:89] "snapshot-controller-56fcc65765-4qwlb" [d4cd83fc-a074-4317-9b02-22010ae0ca66] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 18:13:55.772953  749135 system_pods.go:89] "snapshot-controller-56fcc65765-8rk95" [63d1f200-a587-488c-82d3-bf38586a6fd0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 18:13:55.772957  749135 system_pods.go:89] "storage-provisioner" [0e9e378d-208e-46e0-a2be-70f96e59408a] Running
	I0920 18:13:55.772963  749135 system_pods.go:126] duration metric: took 10.336403ms to wait for k8s-apps to be running ...
	I0920 18:13:55.772972  749135 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:13:55.773018  749135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:13:55.793348  749135 system_svc.go:56] duration metric: took 20.361414ms WaitForService to wait for kubelet
	I0920 18:13:55.793389  749135 kubeadm.go:582] duration metric: took 25.675912921s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:13:55.793417  749135 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:13:55.802544  749135 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:13:55.802600  749135 node_conditions.go:123] node cpu capacity is 2
	I0920 18:13:55.802617  749135 node_conditions.go:105] duration metric: took 9.193115ms to run NodePressure ...
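
The two capacity figures above (17734596Ki of ephemeral storage, 2 CPUs) come straight from the node's status. A minimal client-go sketch that reads the same fields is shown below; the kubeconfig path is an assumption, and this illustrates the check rather than reproducing the code that produced the log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; any kubeconfig that reaches the cluster works here.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		// The log above reports cpu capacity 2 and ephemeral storage 17734596Ki.
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}
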
	I0920 18:13:55.802639  749135 start.go:241] waiting for startup goroutines ...
	I0920 18:13:55.807268  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:55.834016  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:55.834628  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:56.049150  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:56.305873  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:56.331424  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:56.331798  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:56.550328  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:56.806065  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:56.829659  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:56.830161  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:57.049081  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:57.306075  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:57.329355  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:57.330540  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:57.549591  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:57.805900  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:57.830374  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:57.832330  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:58.049092  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:58.306271  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:58.329770  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:58.331160  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:58.922331  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:58.923063  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:58.923163  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:58.924173  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:59.050995  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:59.306609  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:59.410277  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:59.410618  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:59.549349  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:59.806119  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:59.829906  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:59.830124  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:00.049161  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:00.306487  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:00.330117  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:00.331103  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:00.549561  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:00.806760  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:00.831148  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:00.831297  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:01.050001  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:01.306298  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:01.407860  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:01.408083  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:01.548728  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:01.806320  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:01.830021  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:01.830689  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:02.048991  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:02.305521  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:02.330400  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:02.331175  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:02.549048  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:02.805598  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:02.830127  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:02.830327  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:03.049629  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:03.305858  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:03.331322  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:03.331679  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:03.548558  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:03.820166  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:03.830589  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:03.832021  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:04.465452  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:04.465905  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:04.465965  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:04.466066  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:04.565162  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:04.805221  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:04.830427  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:04.830573  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:05.050021  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:05.305449  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:05.330307  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:05.331288  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:05.549216  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:05.805952  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:05.830822  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:05.830882  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:06.048888  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:06.305947  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:06.330556  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:06.330915  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:06.549018  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:06.806964  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:06.841818  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:06.843261  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:07.048576  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:07.305982  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:07.330357  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:07.330437  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:07.549676  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:07.813909  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:07.830340  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:07.830795  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:08.050020  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:08.306364  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:08.330678  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:08.332935  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:08.548619  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:08.805004  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:08.830441  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:08.831560  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:09.332291  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:09.333139  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:09.333782  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:09.335034  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:09.549087  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:09.805906  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:09.829949  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:09.830348  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:10.049303  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:10.306098  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:10.329817  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:10.330883  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:10.549227  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:10.951479  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:10.951670  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:10.951904  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:11.048505  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:11.306899  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:11.330827  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:11.331176  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:11.549848  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:11.805719  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:11.830262  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:11.830606  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:12.059649  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:12.305971  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:12.329961  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:12.330563  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:12.549966  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:12.804939  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:12.829214  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:12.830837  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:13.048395  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:13.305641  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:13.331438  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:13.331605  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:13.549421  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:13.805919  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:13.831661  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:13.831730  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:14.049399  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:14.306300  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:14.329818  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:14.330774  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:14.552222  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:14.806365  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:14.829698  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:14.831887  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:15.048953  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:15.305618  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:15.330650  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:15.330943  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:15.548777  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:15.806132  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:15.830944  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:15.831352  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:16.052172  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:16.306342  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:16.329653  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:16.330883  749135 kapi.go:107] duration metric: took 37.504199599s to wait for kubernetes.io/minikube-addons=registry ...
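
The registry wait completes here after roughly 37.5s of the polling that produced the repeated "waiting for pod ... current state: Pending" lines above. A minimal sketch of that kind of label-selector readiness poll with client-go follows; the kubeconfig path, poll interval and timeout are assumptions, and it illustrates the pattern rather than reproducing minikube's kapi helper.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podsReady reports whether every pod matching the selector is Running
// and has the Ready condition set to True.
func podsReady(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	if len(pods.Items) == 0 {
		return false, nil // nothing scheduled yet
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			return false, nil
		}
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	// Assumed kubeconfig path, poll interval and timeout for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	selector := "kubernetes.io/minikube-addons=registry" // label selector from the log above
	for {
		ok, err := podsReady(ctx, cs, "kube-system", selector)
		if err == nil && ok {
			fmt.Println("all pods matching", selector, "are Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for", selector)
			return
		case <-time.After(500 * time.Millisecond):
		}
	}
}
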
	I0920 18:14:16.548598  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:16.805754  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:16.830184  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:17.049843  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:17.383048  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:17.383735  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:17.550278  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:17.806058  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:17.829341  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:18.051596  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:18.306388  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:18.334664  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:18.552534  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:18.806897  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:18.830308  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:19.050045  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:19.306131  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:19.329862  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:19.550696  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:19.807045  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:19.829977  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:20.048666  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:20.306256  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:20.329911  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:20.550226  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:20.806144  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:20.830855  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:21.049583  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:21.310640  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:21.412808  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:21.549653  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:21.805953  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:21.829404  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:22.049850  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:22.315829  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:22.331862  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:22.549120  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:22.806085  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:22.829986  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:23.049654  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:23.306266  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:23.330058  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:23.560251  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:23.807013  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:23.830715  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:24.049404  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:24.306201  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:24.330512  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:24.595031  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:24.806293  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:24.907159  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:25.048965  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:25.305513  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:25.331059  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:25.549920  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:25.805287  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:25.830246  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:26.048992  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:26.306656  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:26.329987  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:26.549698  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:26.808992  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:26.829741  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:27.052649  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:27.312773  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:27.331951  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:27.562526  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:27.805604  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:27.830050  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:28.067172  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:28.306333  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:28.330924  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:28.550567  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:28.807713  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:28.836265  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:29.049440  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:29.305994  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:29.329628  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:29.551265  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:29.807081  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:29.829169  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:30.051607  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:30.308200  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:30.331298  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:30.553108  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:30.822844  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:30.831353  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:31.049853  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:31.305139  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:31.329419  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:31.549350  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:31.806142  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:31.829483  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:32.053013  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:32.306129  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:32.330537  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:32.771680  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:32.806908  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:32.831303  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:33.050163  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:33.305068  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:33.330437  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:33.548440  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:33.806177  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:33.830995  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:34.049496  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:34.310365  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:34.329994  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:34.548907  749135 kapi.go:107] duration metric: took 53.50460724s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 18:14:34.805871  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:34.830222  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:35.306762  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:35.330726  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:35.806453  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:35.830187  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:36.305548  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:36.330510  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:36.806443  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:36.829844  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:37.306287  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:37.330018  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:37.806187  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:37.829944  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:38.306428  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:38.330700  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:38.806275  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:38.830764  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:39.305577  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:39.330471  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:39.806014  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:39.829683  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:40.306572  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:40.329962  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:40.806663  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:40.830402  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:41.305985  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:41.329856  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:41.807066  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:41.829842  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:42.305779  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:42.330575  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:42.805256  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:42.829665  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:43.305345  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:43.329924  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:43.805970  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:43.829619  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:44.305067  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:44.330110  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:44.807165  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:44.832428  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:45.307073  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:45.329430  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:45.807239  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:45.829759  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:46.305795  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:46.330660  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:46.807307  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:46.829950  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:47.306710  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:47.330054  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:47.806495  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:47.830576  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:48.305615  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:48.330601  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:48.805326  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:48.829994  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:49.306221  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:49.330067  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:49.807517  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:49.831847  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:50.312486  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:50.412022  749135 kapi.go:107] duration metric: took 1m11.586419635s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 18:14:50.805525  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:51.306784  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:51.919819  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:52.306451  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:52.809242  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:53.318752  749135 kapi.go:107] duration metric: took 1m10.516788064s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 18:14:53.320395  749135 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-446299 cluster.
	I0920 18:14:53.321854  749135 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 18:14:53.323252  749135 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 18:14:53.324985  749135 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, storage-provisioner, default-storageclass, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0920 18:14:53.326283  749135 addons.go:510] duration metric: took 1m23.208765269s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner storage-provisioner default-storageclass metrics-server inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0920 18:14:53.326342  749135 start.go:246] waiting for cluster config update ...
	I0920 18:14:53.326365  749135 start.go:255] writing updated cluster config ...
	I0920 18:14:53.326710  749135 ssh_runner.go:195] Run: rm -f paused
	I0920 18:14:53.387365  749135 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:14:53.389186  749135 out.go:177] * Done! kubectl is now configured to use "addons-446299" cluster and "default" namespace by default
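The gcp-auth note logged at 18:14:53 above states that a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. As a minimal sketch (not part of the test run), the Go snippet below builds such a pod spec with the standard client-go API types; only the label key comes from the log, while the pod name, image, and the label value "true" are illustrative assumptions.

	package main
	
	import (
		"encoding/json"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)
	
	func main() {
		// Hypothetical pod: the gcp-auth-skip-secret label key is taken from the
		// minikube log above; name, image, and label value are assumptions.
		pod := corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:   "example-no-gcp-auth",
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:  "app",
					Image: "gcr.io/k8s-minikube/busybox",
				}},
			},
		}
		// Print the manifest as JSON so it could be applied with kubectl.
		out, _ := json.MarshalIndent(pod, "", "  ")
		fmt.Println(string(out))
	}

Applying a manifest produced this way (for example with kubectl apply -f) should yield a pod without the GCP credential mount, assuming the gcp-auth webhook skips pods based on the presence of that label key.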
	
	
	==> CRI-O <==
	Sep 20 18:31:30 addons-446299 crio[659]: time="2024-09-20 18:31:30.872807028Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7db147fe-1226-476a-8130-88fc340450d4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:30 addons-446299 crio[659]: time="2024-09-20 18:31:30.873244240Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c4b9c3a7c53984fdcd53d01df116f55695ae712f2f303bd6c13b7f7ae352228,PodSandboxId:efe0ec0dcbcc2ed97a1516bf84bf6944f46cc3c709619429a3f8a6ed7ec20db4,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726856092713670363,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9scf7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e1fe9053-9c74-44c1-b9eb-33e656a4810b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba7dc5faa58b70f8ae294e26f758d07d8a41941a4b50201e68cc018c51a0c741,PodSandboxId:75840320e52800f1f44b2e6c517cc9307855642595e4a7055201d0ba2d030659,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726856089744039479,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-8kt58,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 91004bb0-5831-431e-8777-5e
8e4b5296bc,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b094e7c30c796bf0bee43b60b80d46621df4bbd767dc91c732eb3b7bfa0bb00c,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726856074238826249,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed98529d363a04b2955c02104f56e8a3cd80d69b45b2e1944ff3b0b7c189288,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726856072837441671,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69da68d150b2a5583b7305709c1c4bbf0f0a8590d238d599504b11d9ad7b529e,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726856070768208336,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd9ca7a3ca987a47ab5b416daf04522a3b27c6339db4003eb231d16ece603a60,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726856069831000814,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2b6759c0bf97ff3d4de314ce5ca4e5311a8546b342d1ec787ca3a1624f8908,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726856068009772282,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66723f0443fe259bbab9521031456f7833339138ca42ab655fadf6bafc2136c5,PodSandboxId:00b4d98c2977
96e0eb1b921793bddbf0c466ffdc076d60dd27517a349c2d3749,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726856066130067570,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684355d7-d68e-4357-8103-d8350a38ea37,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c917700eb77472b431699f7e3b8ffa5e99fb0c6e7b94da0e7dc3e5d789ff7866,Pod
SandboxId:3ffd6a03ee49011ec8d222722b52204537020ec67831669422b18f2722d276e2,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726856064693574171,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131974d-0f4b-4bc6-bec3-d4c797279aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:509b6bbf231a9f6acf9ed9b5a160d57af8fe6ce822
d14a360f1c69aead3f9d36,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726856062559192499,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e86a2c89e146b1f6fc31a26a2e49b335f8ae30c35e76d7136b68425260628fef,PodSandboxId:a24f9a7c284879488d62c5c3a7402fbdc7b2ff55b494a70888c8b4b46593c754,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061202431069,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2mwr8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: afcf3275-77b0-49cd-b425-e1c3fe89fe90,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:bf44e059a196a437fcc79e35dc09edc08e7e7fa8799df9f5556af7ec52f8bbcc,PodSandboxId:1938162f1608400bc041a5b0473880759f6d77d6783afec076342b08458fb334,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061156977853,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sdwls,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8334b2c4-8b09-408c-8652-46103ce6f6c6,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f5bce9e468f1d83d07951514190608f5cb1a2826158632ec7e66e3d069b730,PodSandboxId:46ab05da30745fa494969aa465b9ae41146fb457dd17388f6f0fbfa7637de4b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059566643922,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-4qwlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4cd83fc-a074-4317-9b02-22010ae0ca66,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf93216045927e57562a5ef14225eebdfc0b71d50b89062312728787ee2e82f,PodSandboxId:f64e4538489ab0114de17e1f8f0c98d3d95618162fa5d2ed9b3853eb59a75d77,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059450265287,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8rk95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63d1f200-a587-488c-82d3-bf38586a6fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b425ff4f976afe3cb61d35934638e72a10e0094f7b61f40352a2fee42636302f,PodSandboxId:a0bef6fd3ee4b307210dd0ac0e2746329872520eb77ba21f03f92566351704f2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726856046927873598,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-tvbgx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b4d58283-346f-437d-adfb-34215341023e,},A
nnotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68195d8abd2e36c4e6f93a25bf60ca76fde83bf77a850a92b5213e7653c8414e,PodSandboxId:50aa8158427c9580c2a5ec7846daa046ebdb66adcc3769f3b811e9bfd73dee74,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726856026660615460,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 631849c1-f984-4e83-b07b-6b2ed4eb0697,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123e17c57dc2abd9c047233f8257257a3994d71637992344add53ad7199bd9f0,PodSandboxId:2de8a3616c78216796d1a30e49390fa1880efae5c01dc6d060c3a9fc52733244,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856016407131102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name:
storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e9e378d-208e-46e0-a2be-70f96e59408a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52dc29cba22a178059e3f5273c57de1362df61bcd21abc9ad9c5058087ed31a,PodSandboxId:a7fdf4add17f82634ceda8e2a8ce96fc2312b21d1e4bcabce0730c45dba99a5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856014256879968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8b5fx,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 226fc466-f0b5-4501-8879-b8b9b8d758ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371fb9f89e965c1d1f23b67cb00baa69dc199d2d1a7cb0255780a4516c7256a6,PodSandboxId:5aa37b64d2a9c61038f28fea479857487cf0c835df5704953ae6496a18553faf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063
eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856011173606981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pcgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934faade-c115-4ced-9bb6-c22a2fe014f2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e7734f588477ea0c8338b75bff4c99d2033144998f9977041fbf99b5880072,PodSandboxId:4306bc0f35baa7738aceb1c5a0dfcf9c43a7541ffb8e1e463f1d2bfb3b4ddf65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:C
ONTAINER_RUNNING,CreatedAt:1726856000251287780,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f419eac436c5a6f133bb67c6a198274,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730952f4127d66b35d731eb28568293e71789263c71a1a0255283cb51922992c,PodSandboxId:403b403cdf21825fc57049326772376016cc8b60292a2666bdde28fa4d9d97d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,
CreatedAt:1726856000260280505,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da0809c41e3f89be51ba1d85d92334c0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402ab000bdb9360b9d14054aa336dc4312504e85cd5336ba788bcc24a74fb551,PodSandboxId:17de22cbd91b4d025017f1149b32f2168ea0cac728b75d80f78ab208ff3de7aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17268560002331561
33,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86ddc6bc2cc035d3de8f8c47a04894ae,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8af18aadd9a198bf616d46a7b451c4aa04e96f96e40f4b3bfe6f0ed2db6278e,PodSandboxId:859cc747f1c82c2cfec8fa47af83f84bb172224df65a7adc26b7cd23a8e2bb3d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856000241829850,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c1dc236d6aa092754be85db9af15d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7db147fe-1226-476a-8130-88fc340450d4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:30 addons-446299 crio[659]: time="2024-09-20 18:31:30.908114329Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=71ef9a57-ee80-4d62-b3d5-8561caf5f170 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:31:30 addons-446299 crio[659]: time="2024-09-20 18:31:30.908189256Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=71ef9a57-ee80-4d62-b3d5-8561caf5f170 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:31:30 addons-446299 crio[659]: time="2024-09-20 18:31:30.909070609Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=61e165c9-a606-4abc-ae8c-c3856e3744ba name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:31:30 addons-446299 crio[659]: time="2024-09-20 18:31:30.910585046Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857090910560672,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=61e165c9-a606-4abc-ae8c-c3856e3744ba name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:31:30 addons-446299 crio[659]: time="2024-09-20 18:31:30.911060086Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7491761a-9fbb-419b-9229-b4d3d61f5ad2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:30 addons-446299 crio[659]: time="2024-09-20 18:31:30.911111611Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7491761a-9fbb-419b-9229-b4d3d61f5ad2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:30 addons-446299 crio[659]: time="2024-09-20 18:31:30.911563733Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c4b9c3a7c53984fdcd53d01df116f55695ae712f2f303bd6c13b7f7ae352228,PodSandboxId:efe0ec0dcbcc2ed97a1516bf84bf6944f46cc3c709619429a3f8a6ed7ec20db4,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726856092713670363,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9scf7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e1fe9053-9c74-44c1-b9eb-33e656a4810b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba7dc5faa58b70f8ae294e26f758d07d8a41941a4b50201e68cc018c51a0c741,PodSandboxId:75840320e52800f1f44b2e6c517cc9307855642595e4a7055201d0ba2d030659,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726856089744039479,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-8kt58,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 91004bb0-5831-431e-8777-5e
8e4b5296bc,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b094e7c30c796bf0bee43b60b80d46621df4bbd767dc91c732eb3b7bfa0bb00c,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726856074238826249,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed98529d363a04b2955c02104f56e8a3cd80d69b45b2e1944ff3b0b7c189288,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726856072837441671,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69da68d150b2a5583b7305709c1c4bbf0f0a8590d238d599504b11d9ad7b529e,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726856070768208336,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd9ca7a3ca987a47ab5b416daf04522a3b27c6339db4003eb231d16ece603a60,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726856069831000814,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2b6759c0bf97ff3d4de314ce5ca4e5311a8546b342d1ec787ca3a1624f8908,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726856068009772282,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66723f0443fe259bbab9521031456f7833339138ca42ab655fadf6bafc2136c5,PodSandboxId:00b4d98c2977
96e0eb1b921793bddbf0c466ffdc076d60dd27517a349c2d3749,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726856066130067570,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684355d7-d68e-4357-8103-d8350a38ea37,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c917700eb77472b431699f7e3b8ffa5e99fb0c6e7b94da0e7dc3e5d789ff7866,Pod
SandboxId:3ffd6a03ee49011ec8d222722b52204537020ec67831669422b18f2722d276e2,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726856064693574171,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131974d-0f4b-4bc6-bec3-d4c797279aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:509b6bbf231a9f6acf9ed9b5a160d57af8fe6ce822
d14a360f1c69aead3f9d36,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726856062559192499,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e86a2c89e146b1f6fc31a26a2e49b335f8ae30c35e76d7136b68425260628fef,PodSandboxId:a24f9a7c284879488d62c5c3a7402fbdc7b2ff55b494a70888c8b4b46593c754,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061202431069,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2mwr8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: afcf3275-77b0-49cd-b425-e1c3fe89fe90,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:bf44e059a196a437fcc79e35dc09edc08e7e7fa8799df9f5556af7ec52f8bbcc,PodSandboxId:1938162f1608400bc041a5b0473880759f6d77d6783afec076342b08458fb334,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061156977853,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sdwls,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8334b2c4-8b09-408c-8652-46103ce6f6c6,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f5bce9e468f1d83d07951514190608f5cb1a2826158632ec7e66e3d069b730,PodSandboxId:46ab05da30745fa494969aa465b9ae41146fb457dd17388f6f0fbfa7637de4b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059566643922,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-4qwlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4cd83fc-a074-4317-9b02-22010ae0ca66,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf93216045927e57562a5ef14225eebdfc0b71d50b89062312728787ee2e82f,PodSandboxId:f64e4538489ab0114de17e1f8f0c98d3d95618162fa5d2ed9b3853eb59a75d77,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059450265287,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8rk95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63d1f200-a587-488c-82d3-bf38586a6fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b425ff4f976afe3cb61d35934638e72a10e0094f7b61f40352a2fee42636302f,PodSandboxId:a0bef6fd3ee4b307210dd0ac0e2746329872520eb77ba21f03f92566351704f2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726856046927873598,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-tvbgx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b4d58283-346f-437d-adfb-34215341023e,},A
nnotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68195d8abd2e36c4e6f93a25bf60ca76fde83bf77a850a92b5213e7653c8414e,PodSandboxId:50aa8158427c9580c2a5ec7846daa046ebdb66adcc3769f3b811e9bfd73dee74,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726856026660615460,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 631849c1-f984-4e83-b07b-6b2ed4eb0697,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123e17c57dc2abd9c047233f8257257a3994d71637992344add53ad7199bd9f0,PodSandboxId:2de8a3616c78216796d1a30e49390fa1880efae5c01dc6d060c3a9fc52733244,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856016407131102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name:
storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e9e378d-208e-46e0-a2be-70f96e59408a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52dc29cba22a178059e3f5273c57de1362df61bcd21abc9ad9c5058087ed31a,PodSandboxId:a7fdf4add17f82634ceda8e2a8ce96fc2312b21d1e4bcabce0730c45dba99a5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856014256879968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8b5fx,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 226fc466-f0b5-4501-8879-b8b9b8d758ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371fb9f89e965c1d1f23b67cb00baa69dc199d2d1a7cb0255780a4516c7256a6,PodSandboxId:5aa37b64d2a9c61038f28fea479857487cf0c835df5704953ae6496a18553faf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063
eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856011173606981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pcgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934faade-c115-4ced-9bb6-c22a2fe014f2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e7734f588477ea0c8338b75bff4c99d2033144998f9977041fbf99b5880072,PodSandboxId:4306bc0f35baa7738aceb1c5a0dfcf9c43a7541ffb8e1e463f1d2bfb3b4ddf65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:C
ONTAINER_RUNNING,CreatedAt:1726856000251287780,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f419eac436c5a6f133bb67c6a198274,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730952f4127d66b35d731eb28568293e71789263c71a1a0255283cb51922992c,PodSandboxId:403b403cdf21825fc57049326772376016cc8b60292a2666bdde28fa4d9d97d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,
CreatedAt:1726856000260280505,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da0809c41e3f89be51ba1d85d92334c0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402ab000bdb9360b9d14054aa336dc4312504e85cd5336ba788bcc24a74fb551,PodSandboxId:17de22cbd91b4d025017f1149b32f2168ea0cac728b75d80f78ab208ff3de7aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17268560002331561
33,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86ddc6bc2cc035d3de8f8c47a04894ae,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8af18aadd9a198bf616d46a7b451c4aa04e96f96e40f4b3bfe6f0ed2db6278e,PodSandboxId:859cc747f1c82c2cfec8fa47af83f84bb172224df65a7adc26b7cd23a8e2bb3d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856000241829850,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c1dc236d6aa092754be85db9af15d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7491761a-9fbb-419b-9229-b4d3d61f5ad2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:30 addons-446299 crio[659]: time="2024-09-20 18:31:30.956949756Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ab869d37-e754-492e-8f9e-22d6d0f2696e name=/runtime.v1.RuntimeService/Version
	Sep 20 18:31:30 addons-446299 crio[659]: time="2024-09-20 18:31:30.957075080Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ab869d37-e754-492e-8f9e-22d6d0f2696e name=/runtime.v1.RuntimeService/Version
	Sep 20 18:31:30 addons-446299 crio[659]: time="2024-09-20 18:31:30.958167783Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7ce7e755-3ca1-4eba-be64-5be994173b74 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:31:30 addons-446299 crio[659]: time="2024-09-20 18:31:30.959266099Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857090959239252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7ce7e755-3ca1-4eba-be64-5be994173b74 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:31:30 addons-446299 crio[659]: time="2024-09-20 18:31:30.959748658Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=01c6c1d0-4bb5-4e92-8ee0-204280d22aed name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:30 addons-446299 crio[659]: time="2024-09-20 18:31:30.959810444Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=01c6c1d0-4bb5-4e92-8ee0-204280d22aed name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:30 addons-446299 crio[659]: time="2024-09-20 18:31:30.960249368Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c4b9c3a7c53984fdcd53d01df116f55695ae712f2f303bd6c13b7f7ae352228,PodSandboxId:efe0ec0dcbcc2ed97a1516bf84bf6944f46cc3c709619429a3f8a6ed7ec20db4,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726856092713670363,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9scf7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e1fe9053-9c74-44c1-b9eb-33e656a4810b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba7dc5faa58b70f8ae294e26f758d07d8a41941a4b50201e68cc018c51a0c741,PodSandboxId:75840320e52800f1f44b2e6c517cc9307855642595e4a7055201d0ba2d030659,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726856089744039479,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-8kt58,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 91004bb0-5831-431e-8777-5e
8e4b5296bc,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b094e7c30c796bf0bee43b60b80d46621df4bbd767dc91c732eb3b7bfa0bb00c,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726856074238826249,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed98529d363a04b2955c02104f56e8a3cd80d69b45b2e1944ff3b0b7c189288,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726856072837441671,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69da68d150b2a5583b7305709c1c4bbf0f0a8590d238d599504b11d9ad7b529e,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726856070768208336,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd9ca7a3ca987a47ab5b416daf04522a3b27c6339db4003eb231d16ece603a60,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726856069831000814,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2b6759c0bf97ff3d4de314ce5ca4e5311a8546b342d1ec787ca3a1624f8908,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726856068009772282,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66723f0443fe259bbab9521031456f7833339138ca42ab655fadf6bafc2136c5,PodSandboxId:00b4d98c2977
96e0eb1b921793bddbf0c466ffdc076d60dd27517a349c2d3749,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726856066130067570,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684355d7-d68e-4357-8103-d8350a38ea37,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c917700eb77472b431699f7e3b8ffa5e99fb0c6e7b94da0e7dc3e5d789ff7866,Pod
SandboxId:3ffd6a03ee49011ec8d222722b52204537020ec67831669422b18f2722d276e2,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726856064693574171,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131974d-0f4b-4bc6-bec3-d4c797279aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:509b6bbf231a9f6acf9ed9b5a160d57af8fe6ce822
d14a360f1c69aead3f9d36,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726856062559192499,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e86a2c89e146b1f6fc31a26a2e49b335f8ae30c35e76d7136b68425260628fef,PodSandboxId:a24f9a7c284879488d62c5c3a7402fbdc7b2ff55b494a70888c8b4b46593c754,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061202431069,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2mwr8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: afcf3275-77b0-49cd-b425-e1c3fe89fe90,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:bf44e059a196a437fcc79e35dc09edc08e7e7fa8799df9f5556af7ec52f8bbcc,PodSandboxId:1938162f1608400bc041a5b0473880759f6d77d6783afec076342b08458fb334,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061156977853,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sdwls,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8334b2c4-8b09-408c-8652-46103ce6f6c6,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f5bce9e468f1d83d07951514190608f5cb1a2826158632ec7e66e3d069b730,PodSandboxId:46ab05da30745fa494969aa465b9ae41146fb457dd17388f6f0fbfa7637de4b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059566643922,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-4qwlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4cd83fc-a074-4317-9b02-22010ae0ca66,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf93216045927e57562a5ef14225eebdfc0b71d50b89062312728787ee2e82f,PodSandboxId:f64e4538489ab0114de17e1f8f0c98d3d95618162fa5d2ed9b3853eb59a75d77,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059450265287,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8rk95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63d1f200-a587-488c-82d3-bf38586a6fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b425ff4f976afe3cb61d35934638e72a10e0094f7b61f40352a2fee42636302f,PodSandboxId:a0bef6fd3ee4b307210dd0ac0e2746329872520eb77ba21f03f92566351704f2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726856046927873598,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-tvbgx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b4d58283-346f-437d-adfb-34215341023e,},A
nnotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68195d8abd2e36c4e6f93a25bf60ca76fde83bf77a850a92b5213e7653c8414e,PodSandboxId:50aa8158427c9580c2a5ec7846daa046ebdb66adcc3769f3b811e9bfd73dee74,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726856026660615460,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 631849c1-f984-4e83-b07b-6b2ed4eb0697,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123e17c57dc2abd9c047233f8257257a3994d71637992344add53ad7199bd9f0,PodSandboxId:2de8a3616c78216796d1a30e49390fa1880efae5c01dc6d060c3a9fc52733244,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856016407131102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name:
storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e9e378d-208e-46e0-a2be-70f96e59408a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52dc29cba22a178059e3f5273c57de1362df61bcd21abc9ad9c5058087ed31a,PodSandboxId:a7fdf4add17f82634ceda8e2a8ce96fc2312b21d1e4bcabce0730c45dba99a5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856014256879968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8b5fx,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 226fc466-f0b5-4501-8879-b8b9b8d758ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371fb9f89e965c1d1f23b67cb00baa69dc199d2d1a7cb0255780a4516c7256a6,PodSandboxId:5aa37b64d2a9c61038f28fea479857487cf0c835df5704953ae6496a18553faf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063
eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856011173606981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pcgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934faade-c115-4ced-9bb6-c22a2fe014f2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e7734f588477ea0c8338b75bff4c99d2033144998f9977041fbf99b5880072,PodSandboxId:4306bc0f35baa7738aceb1c5a0dfcf9c43a7541ffb8e1e463f1d2bfb3b4ddf65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:C
ONTAINER_RUNNING,CreatedAt:1726856000251287780,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f419eac436c5a6f133bb67c6a198274,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730952f4127d66b35d731eb28568293e71789263c71a1a0255283cb51922992c,PodSandboxId:403b403cdf21825fc57049326772376016cc8b60292a2666bdde28fa4d9d97d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,
CreatedAt:1726856000260280505,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da0809c41e3f89be51ba1d85d92334c0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402ab000bdb9360b9d14054aa336dc4312504e85cd5336ba788bcc24a74fb551,PodSandboxId:17de22cbd91b4d025017f1149b32f2168ea0cac728b75d80f78ab208ff3de7aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17268560002331561
33,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86ddc6bc2cc035d3de8f8c47a04894ae,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8af18aadd9a198bf616d46a7b451c4aa04e96f96e40f4b3bfe6f0ed2db6278e,PodSandboxId:859cc747f1c82c2cfec8fa47af83f84bb172224df65a7adc26b7cd23a8e2bb3d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856000241829850,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c1dc236d6aa092754be85db9af15d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=01c6c1d0-4bb5-4e92-8ee0-204280d22aed name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:31 addons-446299 crio[659]: time="2024-09-20 18:31:31.000818457Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=58a56174-99ef-4c32-a632-e8227704c46a name=/runtime.v1.RuntimeService/Version
	Sep 20 18:31:31 addons-446299 crio[659]: time="2024-09-20 18:31:31.000910443Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=58a56174-99ef-4c32-a632-e8227704c46a name=/runtime.v1.RuntimeService/Version
	Sep 20 18:31:31 addons-446299 crio[659]: time="2024-09-20 18:31:31.001963830Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=84c01f28-354d-47c6-bbdf-d1b2b3d0fdb3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:31:31 addons-446299 crio[659]: time="2024-09-20 18:31:31.002984275Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857091002959691,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=84c01f28-354d-47c6-bbdf-d1b2b3d0fdb3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:31:31 addons-446299 crio[659]: time="2024-09-20 18:31:31.003583364Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d3a03c6a-38c1-4de1-aef9-e1142bd7f60e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:31 addons-446299 crio[659]: time="2024-09-20 18:31:31.003656749Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d3a03c6a-38c1-4de1-aef9-e1142bd7f60e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:31 addons-446299 crio[659]: time="2024-09-20 18:31:31.004183325Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c4b9c3a7c53984fdcd53d01df116f55695ae712f2f303bd6c13b7f7ae352228,PodSandboxId:efe0ec0dcbcc2ed97a1516bf84bf6944f46cc3c709619429a3f8a6ed7ec20db4,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726856092713670363,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9scf7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e1fe9053-9c74-44c1-b9eb-33e656a4810b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba7dc5faa58b70f8ae294e26f758d07d8a41941a4b50201e68cc018c51a0c741,PodSandboxId:75840320e52800f1f44b2e6c517cc9307855642595e4a7055201d0ba2d030659,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726856089744039479,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-8kt58,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 91004bb0-5831-431e-8777-5e
8e4b5296bc,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b094e7c30c796bf0bee43b60b80d46621df4bbd767dc91c732eb3b7bfa0bb00c,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726856074238826249,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed98529d363a04b2955c02104f56e8a3cd80d69b45b2e1944ff3b0b7c189288,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726856072837441671,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69da68d150b2a5583b7305709c1c4bbf0f0a8590d238d599504b11d9ad7b529e,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726856070768208336,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd9ca7a3ca987a47ab5b416daf04522a3b27c6339db4003eb231d16ece603a60,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726856069831000814,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2b6759c0bf97ff3d4de314ce5ca4e5311a8546b342d1ec787ca3a1624f8908,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726856068009772282,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66723f0443fe259bbab9521031456f7833339138ca42ab655fadf6bafc2136c5,PodSandboxId:00b4d98c2977
96e0eb1b921793bddbf0c466ffdc076d60dd27517a349c2d3749,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726856066130067570,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684355d7-d68e-4357-8103-d8350a38ea37,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c917700eb77472b431699f7e3b8ffa5e99fb0c6e7b94da0e7dc3e5d789ff7866,Pod
SandboxId:3ffd6a03ee49011ec8d222722b52204537020ec67831669422b18f2722d276e2,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726856064693574171,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131974d-0f4b-4bc6-bec3-d4c797279aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:509b6bbf231a9f6acf9ed9b5a160d57af8fe6ce822
d14a360f1c69aead3f9d36,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726856062559192499,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e86a2c89e146b1f6fc31a26a2e49b335f8ae30c35e76d7136b68425260628fef,PodSandboxId:a24f9a7c284879488d62c5c3a7402fbdc7b2ff55b494a70888c8b4b46593c754,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061202431069,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2mwr8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: afcf3275-77b0-49cd-b425-e1c3fe89fe90,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:bf44e059a196a437fcc79e35dc09edc08e7e7fa8799df9f5556af7ec52f8bbcc,PodSandboxId:1938162f1608400bc041a5b0473880759f6d77d6783afec076342b08458fb334,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061156977853,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sdwls,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8334b2c4-8b09-408c-8652-46103ce6f6c6,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f5bce9e468f1d83d07951514190608f5cb1a2826158632ec7e66e3d069b730,PodSandboxId:46ab05da30745fa494969aa465b9ae41146fb457dd17388f6f0fbfa7637de4b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059566643922,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-4qwlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4cd83fc-a074-4317-9b02-22010ae0ca66,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf93216045927e57562a5ef14225eebdfc0b71d50b89062312728787ee2e82f,PodSandboxId:f64e4538489ab0114de17e1f8f0c98d3d95618162fa5d2ed9b3853eb59a75d77,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059450265287,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8rk95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63d1f200-a587-488c-82d3-bf38586a6fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b425ff4f976afe3cb61d35934638e72a10e0094f7b61f40352a2fee42636302f,PodSandboxId:a0bef6fd3ee4b307210dd0ac0e2746329872520eb77ba21f03f92566351704f2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726856046927873598,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-tvbgx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b4d58283-346f-437d-adfb-34215341023e,},A
nnotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68195d8abd2e36c4e6f93a25bf60ca76fde83bf77a850a92b5213e7653c8414e,PodSandboxId:50aa8158427c9580c2a5ec7846daa046ebdb66adcc3769f3b811e9bfd73dee74,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726856026660615460,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 631849c1-f984-4e83-b07b-6b2ed4eb0697,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123e17c57dc2abd9c047233f8257257a3994d71637992344add53ad7199bd9f0,PodSandboxId:2de8a3616c78216796d1a30e49390fa1880efae5c01dc6d060c3a9fc52733244,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856016407131102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name:
storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e9e378d-208e-46e0-a2be-70f96e59408a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52dc29cba22a178059e3f5273c57de1362df61bcd21abc9ad9c5058087ed31a,PodSandboxId:a7fdf4add17f82634ceda8e2a8ce96fc2312b21d1e4bcabce0730c45dba99a5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856014256879968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8b5fx,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 226fc466-f0b5-4501-8879-b8b9b8d758ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371fb9f89e965c1d1f23b67cb00baa69dc199d2d1a7cb0255780a4516c7256a6,PodSandboxId:5aa37b64d2a9c61038f28fea479857487cf0c835df5704953ae6496a18553faf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063
eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856011173606981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pcgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934faade-c115-4ced-9bb6-c22a2fe014f2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e7734f588477ea0c8338b75bff4c99d2033144998f9977041fbf99b5880072,PodSandboxId:4306bc0f35baa7738aceb1c5a0dfcf9c43a7541ffb8e1e463f1d2bfb3b4ddf65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:C
ONTAINER_RUNNING,CreatedAt:1726856000251287780,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f419eac436c5a6f133bb67c6a198274,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730952f4127d66b35d731eb28568293e71789263c71a1a0255283cb51922992c,PodSandboxId:403b403cdf21825fc57049326772376016cc8b60292a2666bdde28fa4d9d97d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,
CreatedAt:1726856000260280505,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da0809c41e3f89be51ba1d85d92334c0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402ab000bdb9360b9d14054aa336dc4312504e85cd5336ba788bcc24a74fb551,PodSandboxId:17de22cbd91b4d025017f1149b32f2168ea0cac728b75d80f78ab208ff3de7aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17268560002331561
33,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86ddc6bc2cc035d3de8f8c47a04894ae,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8af18aadd9a198bf616d46a7b451c4aa04e96f96e40f4b3bfe6f0ed2db6278e,PodSandboxId:859cc747f1c82c2cfec8fa47af83f84bb172224df65a7adc26b7cd23a8e2bb3d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856000241829850,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c1dc236d6aa092754be85db9af15d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d3a03c6a-38c1-4de1-aef9-e1142bd7f60e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:31:31 addons-446299 crio[659]: time="2024-09-20 18:31:31.022466796Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=8cca159b-c43a-4069-b931-0fa9ce6d8d2b name=/runtime.v1.RuntimeService/Status
	Sep 20 18:31:31 addons-446299 crio[659]: time="2024-09-20 18:31:31.022544792Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=8cca159b-c43a-4069-b931-0fa9ce6d8d2b name=/runtime.v1.RuntimeService/Status
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	7c4b9c3a7c539       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 16 minutes ago      Running             gcp-auth                                 0                   efe0ec0dcbcc2       gcp-auth-89d5ffd79-9scf7
	ba7dc5faa58b7       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6                             16 minutes ago      Running             controller                               0                   75840320e5280       ingress-nginx-controller-bc57996ff-8kt58
	b094e7c30c796       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          16 minutes ago      Running             csi-snapshotter                          0                   eccc7c4b1b4ce       csi-hostpathplugin-fcmx5
	bed98529d363a       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          16 minutes ago      Running             csi-provisioner                          0                   eccc7c4b1b4ce       csi-hostpathplugin-fcmx5
	69da68d150b2a       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            17 minutes ago      Running             liveness-probe                           0                   eccc7c4b1b4ce       csi-hostpathplugin-fcmx5
	fd9ca7a3ca987       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           17 minutes ago      Running             hostpath                                 0                   eccc7c4b1b4ce       csi-hostpathplugin-fcmx5
	5a2b6759c0bf9       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                17 minutes ago      Running             node-driver-registrar                    0                   eccc7c4b1b4ce       csi-hostpathplugin-fcmx5
	66723f0443fe2       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              17 minutes ago      Running             csi-resizer                              0                   00b4d98c29779       csi-hostpath-resizer-0
	c917700eb7747       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             17 minutes ago      Running             csi-attacher                             0                   3ffd6a03ee490       csi-hostpath-attacher-0
	509b6bbf231a9       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   17 minutes ago      Running             csi-external-health-monitor-controller   0                   eccc7c4b1b4ce       csi-hostpathplugin-fcmx5
	e86a2c89e146b       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                                             17 minutes ago      Exited              patch                                    1                   a24f9a7c28487       ingress-nginx-admission-patch-2mwr8
	bf44e059a196a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   17 minutes ago      Exited              create                                   0                   1938162f16084       ingress-nginx-admission-create-sdwls
	33f5bce9e468f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      17 minutes ago      Running             volume-snapshot-controller               0                   46ab05da30745       snapshot-controller-56fcc65765-4qwlb
	cbf9321604592       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      17 minutes ago      Running             volume-snapshot-controller               0                   f64e4538489ab       snapshot-controller-56fcc65765-8rk95
	b425ff4f976af       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             17 minutes ago      Running             local-path-provisioner                   0                   a0bef6fd3ee4b       local-path-provisioner-86d989889c-tvbgx
	68195d8abd2e3       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             17 minutes ago      Running             minikube-ingress-dns                     0                   50aa8158427c9       kube-ingress-dns-minikube
	123e17c57dc2a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             17 minutes ago      Running             storage-provisioner                      0                   2de8a3616c782       storage-provisioner
	d52dc29cba22a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             17 minutes ago      Running             coredns                                  0                   a7fdf4add17f8       coredns-7c65d6cfc9-8b5fx
	371fb9f89e965       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                                             17 minutes ago      Running             kube-proxy                               0                   5aa37b64d2a9c       kube-proxy-9pcgb
	730952f4127d6       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                                             18 minutes ago      Running             kube-apiserver                           0                   403b403cdf218       kube-apiserver-addons-446299
	e9e7734f58847       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                                             18 minutes ago      Running             kube-scheduler                           0                   4306bc0f35baa       kube-scheduler-addons-446299
	a8af18aadd9a1       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                                             18 minutes ago      Running             kube-controller-manager                  0                   859cc747f1c82       kube-controller-manager-addons-446299
	402ab000bdb93       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             18 minutes ago      Running             etcd                                     0                   17de22cbd91b4       etcd-addons-446299
	
	
	==> coredns [d52dc29cba22a178059e3f5273c57de1362df61bcd21abc9ad9c5058087ed31a] <==
	[INFO] 127.0.0.1:45092 - 31226 "HINFO IN 8537533385009167611.1098357581305743543. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017946303s
	[INFO] 10.244.0.7:50895 - 60070 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000864499s
	[INFO] 10.244.0.7:50895 - 30883 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.004754851s
	[INFO] 10.244.0.7:60479 - 45291 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000276551s
	[INFO] 10.244.0.7:60479 - 60648 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000259587s
	[INFO] 10.244.0.7:34337 - 50221 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000103649s
	[INFO] 10.244.0.7:34337 - 3119 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000190818s
	[INFO] 10.244.0.7:50579 - 48699 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000149541s
	[INFO] 10.244.0.7:50579 - 13882 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00029954s
	[INFO] 10.244.0.7:52674 - 19194 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000100903s
	[INFO] 10.244.0.7:52674 - 48616 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000131897s
	[INFO] 10.244.0.7:34842 - 24908 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000052174s
	[INFO] 10.244.0.7:34842 - 17742 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000131345s
	[INFO] 10.244.0.7:58542 - 36156 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000047177s
	[INFO] 10.244.0.7:58542 - 62014 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000148973s
	[INFO] 10.244.0.7:34082 - 14251 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000145316s
	[INFO] 10.244.0.7:34082 - 45485 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000238133s
	[INFO] 10.244.0.21:56997 - 31030 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000537673s
	[INFO] 10.244.0.21:35720 - 34441 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000147988s
	[INFO] 10.244.0.21:53795 - 23425 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001554s
	[INFO] 10.244.0.21:58869 - 385 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000122258s
	[INFO] 10.244.0.21:37326 - 35127 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00023415s
	[INFO] 10.244.0.21:35448 - 47752 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000126595s
	[INFO] 10.244.0.21:41454 - 25870 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003639103s
	[INFO] 10.244.0.21:51708 - 51164 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00402176s
	
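Note: the CoreDNS entries above show registry.kube-system.svc.cluster.local answering with NOERROR and an A record, so the Registry test's timeout was probably not a DNS failure. Below is a minimal Go sketch, not part of the test suite, of the same lookup-then-GET probe that the failed in-cluster wget performs; it assumes it runs inside a pod on this cluster and uses only the standard library.

package main

import (
	"fmt"
	"net"
	"net/http"
	"time"
)

func main() {
	// DNS check: mirrors the A/AAAA queries visible in the CoreDNS log above.
	addrs, err := net.LookupHost("registry.kube-system.svc.cluster.local")
	fmt.Println("resolved:", addrs, "err:", err)

	// HTTP check: the test expects an "HTTP/1.1 200" response from this endpoint.
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get("http://registry.kube-system.svc.cluster.local")
	if err != nil {
		fmt.Println("GET failed:", err) // the failed test saw a timeout here
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Proto, resp.Status)
}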
	
	==> describe nodes <==
	Name:               addons-446299
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-446299
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=addons-446299
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_13_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-446299
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-446299"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:13:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-446299
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:31:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:28:34 +0000   Fri, 20 Sep 2024 18:13:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:28:34 +0000   Fri, 20 Sep 2024 18:13:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:28:34 +0000   Fri, 20 Sep 2024 18:13:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:28:34 +0000   Fri, 20 Sep 2024 18:13:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.237
	  Hostname:    addons-446299
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 6b51819720d24a4988f4faf5cbed4e8f
	  System UUID:                6b518197-20d2-4a49-88f4-faf5cbed4e8f
	  Boot ID:                    431228fc-f5a8-4282-bf7e-10c36798659f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  default                     task-pv-pod-restore                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m54s
	  gcp-auth                    gcp-auth-89d5ffd79-9scf7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-8kt58    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         17m
	  kube-system                 coredns-7c65d6cfc9-8b5fx                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     18m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 csi-hostpathplugin-fcmx5                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-addons-446299                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         18m
	  kube-system                 kube-apiserver-addons-446299                250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-addons-446299       200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-9pcgb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-addons-446299                100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 snapshot-controller-56fcc65765-4qwlb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 snapshot-controller-56fcc65765-8rk95        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  local-path-storage          local-path-provisioner-86d989889c-tvbgx     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 17m   kube-proxy       
	  Normal  Starting                 18m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m   kubelet          Node addons-446299 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m   kubelet          Node addons-446299 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m   kubelet          Node addons-446299 status is now: NodeHasSufficientPID
	  Normal  NodeReady                18m   kubelet          Node addons-446299 status is now: NodeReady
	  Normal  RegisteredNode           18m   node-controller  Node addons-446299 event: Registered Node addons-446299 in Controller
	
	
	==> dmesg <==
	[  +5.305303] systemd-fstab-generator[1328]: Ignoring "noauto" option for root device
	[  +0.141616] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.046436] kauditd_printk_skb: 135 callbacks suppressed
	[  +5.120665] kauditd_printk_skb: 83 callbacks suppressed
	[  +5.997269] kauditd_printk_skb: 97 callbacks suppressed
	[  +8.458196] kauditd_printk_skb: 5 callbacks suppressed
	[Sep20 18:14] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.706525] kauditd_printk_skb: 34 callbacks suppressed
	[ +16.244583] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.135040] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.940354] kauditd_printk_skb: 30 callbacks suppressed
	[ +13.767745] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.007018] kauditd_printk_skb: 48 callbacks suppressed
	[Sep20 18:15] kauditd_printk_skb: 10 callbacks suppressed
	[Sep20 18:16] kauditd_printk_skb: 30 callbacks suppressed
	[Sep20 18:17] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 18:20] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 18:22] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 18:23] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.877503] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.382620] kauditd_printk_skb: 41 callbacks suppressed
	[  +8.681981] kauditd_printk_skb: 39 callbacks suppressed
	[ +13.570039] kauditd_printk_skb: 14 callbacks suppressed
	[Sep20 18:24] kauditd_printk_skb: 2 callbacks suppressed
	[ +30.180557] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [402ab000bdb9360b9d14054aa336dc4312504e85cd5336ba788bcc24a74fb551] <==
	{"level":"warn","ts":"2024-09-20T18:14:32.753338Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"340.730876ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:14:32.753372Z","caller":"traceutil/trace.go:171","msg":"trace[1542998802] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1058; }","duration":"340.769961ms","start":"2024-09-20T18:14:32.412597Z","end":"2024-09-20T18:14:32.753367Z","steps":["trace[1542998802] 'agreement among raft nodes before linearized reading'  (duration: 340.724283ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:14:32.753846Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"217.265355ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:14:32.753903Z","caller":"traceutil/trace.go:171","msg":"trace[581069886] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1058; }","duration":"217.327931ms","start":"2024-09-20T18:14:32.536567Z","end":"2024-09-20T18:14:32.753895Z","steps":["trace[581069886] 'agreement among raft nodes before linearized reading'  (duration: 217.246138ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:14:51.903628Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.538818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-20T18:14:51.904065Z","caller":"traceutil/trace.go:171","msg":"trace[2043860769] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:0; response_revision:1117; }","duration":"144.082045ms","start":"2024-09-20T18:14:51.759954Z","end":"2024-09-20T18:14:51.904036Z","steps":["trace[2043860769] 'count revisions from in-memory index tree'  (duration: 143.478073ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:14:51.904831Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.923374ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:14:51.904891Z","caller":"traceutil/trace.go:171","msg":"trace[386261722] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1117; }","duration":"111.005288ms","start":"2024-09-20T18:14:51.793876Z","end":"2024-09-20T18:14:51.904881Z","steps":["trace[386261722] 'range keys from in-memory index tree'  (duration: 110.882796ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:23:04.403949Z","caller":"traceutil/trace.go:171","msg":"trace[1232773900] linearizableReadLoop","detail":"{readStateIndex:2064; appliedIndex:2063; }","duration":"137.955638ms","start":"2024-09-20T18:23:04.265959Z","end":"2024-09-20T18:23:04.403914Z","steps":["trace[1232773900] 'read index received'  (duration: 137.83631ms)","trace[1232773900] 'applied index is now lower than readState.Index'  (duration: 118.922µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T18:23:04.404190Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.160514ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:23:04.404218Z","caller":"traceutil/trace.go:171","msg":"trace[1586547199] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1925; }","duration":"138.254725ms","start":"2024-09-20T18:23:04.265955Z","end":"2024-09-20T18:23:04.404210Z","steps":["trace[1586547199] 'agreement among raft nodes before linearized reading'  (duration: 138.105756ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:23:04.404422Z","caller":"traceutil/trace.go:171","msg":"trace[700372140] transaction","detail":"{read_only:false; response_revision:1925; number_of_response:1; }","duration":"379.764994ms","start":"2024-09-20T18:23:04.024645Z","end":"2024-09-20T18:23:04.404410Z","steps":["trace[700372140] 'process raft request'  (duration: 379.19458ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:23:04.404517Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T18:23:04.024622Z","time spent":"379.814521ms","remote":"127.0.0.1:36928","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1924 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-20T18:23:21.256394Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1506}
	{"level":"info","ts":"2024-09-20T18:23:21.288238Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1506,"took":"31.314726ms","hash":517065302,"current-db-size-bytes":7016448,"current-db-size":"7.0 MB","current-db-size-in-use-bytes":4055040,"current-db-size-in-use":"4.1 MB"}
	{"level":"info","ts":"2024-09-20T18:23:21.288299Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":517065302,"revision":1506,"compact-revision":-1}
	{"level":"info","ts":"2024-09-20T18:23:22.430993Z","caller":"traceutil/trace.go:171","msg":"trace[200479020] transaction","detail":"{read_only:false; response_revision:2108; number_of_response:1; }","duration":"314.888557ms","start":"2024-09-20T18:23:22.116093Z","end":"2024-09-20T18:23:22.430981Z","steps":["trace[200479020] 'process raft request'  (duration: 314.552392ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:23:22.431107Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T18:23:22.116078Z","time spent":"314.951125ms","remote":"127.0.0.1:37058","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:2038 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:425 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	{"level":"info","ts":"2024-09-20T18:23:23.865254Z","caller":"traceutil/trace.go:171","msg":"trace[102178879] linearizableReadLoop","detail":"{readStateIndex:2258; appliedIndex:2257; }","duration":"203.488059ms","start":"2024-09-20T18:23:23.661753Z","end":"2024-09-20T18:23:23.865241Z","steps":["trace[102178879] 'read index received'  (duration: 203.347953ms)","trace[102178879] 'applied index is now lower than readState.Index'  (duration: 139.623µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T18:23:23.865357Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.585815ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:23:23.865380Z","caller":"traceutil/trace.go:171","msg":"trace[1945616439] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2110; }","duration":"203.624964ms","start":"2024-09-20T18:23:23.661749Z","end":"2024-09-20T18:23:23.865374Z","steps":["trace[1945616439] 'agreement among raft nodes before linearized reading'  (duration: 203.546895ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:23:23.865639Z","caller":"traceutil/trace.go:171","msg":"trace[1429413700] transaction","detail":"{read_only:false; response_revision:2110; number_of_response:1; }","duration":"210.845365ms","start":"2024-09-20T18:23:23.654785Z","end":"2024-09-20T18:23:23.865631Z","steps":["trace[1429413700] 'process raft request'  (duration: 210.352466ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:28:21.262984Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2106}
	{"level":"info","ts":"2024-09-20T18:28:21.285870Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2106,"took":"22.302077ms","hash":3491567488,"current-db-size-bytes":7016448,"current-db-size":"7.0 MB","current-db-size-in-use-bytes":3915776,"current-db-size-in-use":"3.9 MB"}
	{"level":"info","ts":"2024-09-20T18:28:21.285936Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3491567488,"revision":2106,"compact-revision":1506}
	
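Note: several etcd entries above are "apply request took too long" warnings, where the reported "took" exceeds the 100ms expected-duration. The sketch below is an illustrative Go filter (an assumption of this write-up, not part of the report tooling) that reads such JSON log lines from stdin and prints only the slow applies.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"time"
)

// etcdEntry holds just the fields of the zap JSON log lines we care about.
type etcdEntry struct {
	Level string `json:"level"`
	Msg   string `json:"msg"`
	Took  string `json:"took"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // etcd log lines can be long
	for sc.Scan() {
		var e etcdEntry
		if json.Unmarshal(sc.Bytes(), &e) != nil {
			continue // skip non-JSON lines such as the section headers
		}
		if e.Msg != "apply request took too long" {
			continue
		}
		if d, err := time.ParseDuration(e.Took); err == nil && d > 100*time.Millisecond {
			fmt.Printf("slow apply: %s\n", d)
		}
	}
}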
	
	==> gcp-auth [7c4b9c3a7c53984fdcd53d01df116f55695ae712f2f303bd6c13b7f7ae352228] <==
	2024/09/20 18:14:53 Ready to write response ...
	2024/09/20 18:14:55 Ready to marshal response ...
	2024/09/20 18:14:55 Ready to write response ...
	2024/09/20 18:14:55 Ready to marshal response ...
	2024/09/20 18:14:55 Ready to write response ...
	2024/09/20 18:22:59 Ready to marshal response ...
	2024/09/20 18:22:59 Ready to write response ...
	2024/09/20 18:22:59 Ready to marshal response ...
	2024/09/20 18:22:59 Ready to write response ...
	2024/09/20 18:22:59 Ready to marshal response ...
	2024/09/20 18:22:59 Ready to write response ...
	2024/09/20 18:23:05 Ready to marshal response ...
	2024/09/20 18:23:05 Ready to write response ...
	2024/09/20 18:23:05 Ready to marshal response ...
	2024/09/20 18:23:05 Ready to write response ...
	2024/09/20 18:23:10 Ready to marshal response ...
	2024/09/20 18:23:10 Ready to write response ...
	2024/09/20 18:23:15 Ready to marshal response ...
	2024/09/20 18:23:15 Ready to write response ...
	2024/09/20 18:23:18 Ready to marshal response ...
	2024/09/20 18:23:18 Ready to write response ...
	2024/09/20 18:23:29 Ready to marshal response ...
	2024/09/20 18:23:29 Ready to write response ...
	2024/09/20 18:23:37 Ready to marshal response ...
	2024/09/20 18:23:37 Ready to write response ...
	
	
	==> kernel <==
	 18:31:31 up 18 min,  0 users,  load average: 0.05, 0.16, 0.23
	Linux addons-446299 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [730952f4127d66b35d731eb28568293e71789263c71a1a0255283cb51922992c] <==
	 > logger="UnhandledError"
	W0920 18:15:27.823202       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:15:27.823313       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0920 18:15:27.823420       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:15:27.823588       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 18:15:27.824490       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 18:15:27.825326       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0920 18:15:31.828151       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.147.48:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.147.48:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	W0920 18:15:31.828390       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:15:31.828450       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 18:15:31.847786       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0920 18:15:31.853561       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0920 18:22:59.185908       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.29.221"}
	I0920 18:23:23.918494       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0920 18:23:25.009930       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0920 18:23:29.482103       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0920 18:23:29.675487       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.190.241"}
	I0920 18:23:30.728395       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0920 18:28:32.892900       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
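Note: the apiserver log above shows the aggregator repeatedly failing against the v1beta1.metrics.k8s.io APIService at https://10.98.147.48:443/apis/metrics.k8s.io/v1beta1 (503 responses and a context deadline exceeded). A hedged Go sketch of repeating that probe by hand follows; it assumes the ClusterIP is reachable from wherever it runs, skips certificate verification for brevity, and sends no bearer token, so a 401/403 would still indicate the service is at least reachable.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   10 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// Same URL the aggregator probes in the error messages above.
	resp, err := client.Get("https://10.98.147.48:443/apis/metrics.k8s.io/v1beta1")
	if err != nil {
		fmt.Println("probe failed:", err) // mirrors the "context deadline exceeded" case
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}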
	
	==> kube-controller-manager [a8af18aadd9a198bf616d46a7b451c4aa04e96f96e40f4b3bfe6f0ed2db6278e] <==
	I0920 18:24:11.155220       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="8.255µs"
	W0920 18:24:29.364098       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:24:29.364246       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:25:01.947190       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:25:01.947288       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:25:33.105344       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:25:33.105500       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:26:14.610422       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:26:14.610571       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:27:08.759968       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:27:08.760083       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:27:45.244240       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:27:45.244314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 18:28:08.201785       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="9.76µs"
	W0920 18:28:30.076776       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:28:30.076841       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 18:28:34.422130       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-446299"
	W0920 18:29:28.507680       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:29:28.507846       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:29:59.665047       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:29:59.665123       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:30:30.232079       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:30:30.232218       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:31:01.920623       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:31:01.920853       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [371fb9f89e965c1d1f23b67cb00baa69dc199d2d1a7cb0255780a4516c7256a6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:13:32.095684       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 18:13:32.111185       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.237"]
	E0920 18:13:32.111246       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:13:32.254832       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:13:32.254884       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:13:32.254908       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:13:32.262039       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:13:32.262450       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:13:32.262484       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:13:32.268397       1 config.go:199] "Starting service config controller"
	I0920 18:13:32.268443       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:13:32.268473       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:13:32.268477       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:13:32.268988       1 config.go:328] "Starting node config controller"
	I0920 18:13:32.268994       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:13:32.368877       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:13:32.368886       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:13:32.369073       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e9e7734f588477ea0c8338b75bff4c99d2033144998f9977041fbf99b5880072] <==
	W0920 18:13:22.809246       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 18:13:22.809282       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:22.809585       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 18:13:22.809621       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:22.813253       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 18:13:22.813298       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:22.813377       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 18:13:22.813413       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:22.813464       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 18:13:22.813478       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:22.815129       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 18:13:22.815174       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:23.637031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 18:13:23.637068       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:23.746262       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 18:13:23.746361       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:23.943434       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 18:13:23.943536       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:23.956043       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 18:13:23.956129       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:23.968884       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 18:13:23.969017       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:24.340405       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 18:13:24.340516       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0920 18:13:27.096843       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 18:30:45 addons-446299 kubelet[1199]: E0920 18:30:45.875031    1199 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 20 18:30:45 addons-446299 kubelet[1199]: E0920 18:30:45.875144    1199 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:task-pv-container,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-server,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:task-pv-storage,ReadOnly:false,MountPath:/usr/share/nginx/html,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveRea
dOnly:nil,},VolumeMount{Name:kube-api-access-zzgp9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod task-pv-pod-restore_default(c0105316-5ff3-4ccd-8862-0a9a1965982f): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 20 18:30:45 addons-446299 kubelet[1199]: E0920 18:30:45.876450    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod-restore" podUID="c0105316-5ff3-4ccd-8862-0a9a1965982f"
	Sep 20 18:30:54 addons-446299 kubelet[1199]: E0920 18:30:54.167016    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="e00699c2-7689-43aa-9a79-f6b8682fbe91"
	Sep 20 18:30:55 addons-446299 kubelet[1199]: E0920 18:30:55.641282    1199 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857055640954941,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:30:55 addons-446299 kubelet[1199]: E0920 18:30:55.641345    1199 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857055640954941,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:30:59 addons-446299 kubelet[1199]: E0920 18:30:59.166608    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="785bf044-a4fc-4f3b-aa48-f0c32d84c0cb"
	Sep 20 18:30:59 addons-446299 kubelet[1199]: E0920 18:30:59.167183    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod-restore" podUID="c0105316-5ff3-4ccd-8862-0a9a1965982f"
	Sep 20 18:31:05 addons-446299 kubelet[1199]: E0920 18:31:05.644094    1199 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857065643737280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:05 addons-446299 kubelet[1199]: E0920 18:31:05.644365    1199 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857065643737280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:08 addons-446299 kubelet[1199]: E0920 18:31:08.166058    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="e00699c2-7689-43aa-9a79-f6b8682fbe91"
	Sep 20 18:31:11 addons-446299 kubelet[1199]: E0920 18:31:11.167183    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod-restore" podUID="c0105316-5ff3-4ccd-8862-0a9a1965982f"
	Sep 20 18:31:12 addons-446299 kubelet[1199]: E0920 18:31:12.166087    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="785bf044-a4fc-4f3b-aa48-f0c32d84c0cb"
	Sep 20 18:31:15 addons-446299 kubelet[1199]: E0920 18:31:15.646991    1199 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857075646520696,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:15 addons-446299 kubelet[1199]: E0920 18:31:15.647276    1199 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857075646520696,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:20 addons-446299 kubelet[1199]: E0920 18:31:20.166262    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="e00699c2-7689-43aa-9a79-f6b8682fbe91"
	Sep 20 18:31:22 addons-446299 kubelet[1199]: E0920 18:31:22.168509    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod-restore" podUID="c0105316-5ff3-4ccd-8862-0a9a1965982f"
	Sep 20 18:31:25 addons-446299 kubelet[1199]: E0920 18:31:25.166193    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="785bf044-a4fc-4f3b-aa48-f0c32d84c0cb"
	Sep 20 18:31:25 addons-446299 kubelet[1199]: E0920 18:31:25.207684    1199 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 18:31:25 addons-446299 kubelet[1199]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 18:31:25 addons-446299 kubelet[1199]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 18:31:25 addons-446299 kubelet[1199]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 18:31:25 addons-446299 kubelet[1199]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 18:31:25 addons-446299 kubelet[1199]: E0920 18:31:25.649901    1199 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857085649354696,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:31:25 addons-446299 kubelet[1199]: E0920 18:31:25.650031    1199 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857085649354696,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [123e17c57dc2abd9c047233f8257257a3994d71637992344add53ad7199bd9f0] <==
	I0920 18:13:37.673799       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 18:13:37.889195       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 18:13:37.889268       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 18:13:37.991169       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 18:13:37.991374       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-446299_0cfdff58-c718-409b-bc42-bb5f67205de8!
	I0920 18:13:37.992328       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8e2a2b2a-26e5-43f5-ad91-442df4e21dfd", APIVersion:"v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-446299_0cfdff58-c718-409b-bc42-bb5f67205de8 became leader
	I0920 18:13:38.191750       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-446299_0cfdff58-c718-409b-bc42-bb5f67205de8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-446299 -n addons-446299
helpers_test.go:261: (dbg) Run:  kubectl --context addons-446299 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox nginx task-pv-pod-restore ingress-nginx-admission-create-sdwls ingress-nginx-admission-patch-2mwr8
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-446299 describe pod busybox nginx task-pv-pod-restore ingress-nginx-admission-create-sdwls ingress-nginx-admission-patch-2mwr8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-446299 describe pod busybox nginx task-pv-pod-restore ingress-nginx-admission-create-sdwls ingress-nginx-admission-patch-2mwr8: exit status 1 (90.021468ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-446299/192.168.39.237
	Start Time:       Fri, 20 Sep 2024 18:14:55 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s6l6f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-s6l6f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  16m                 default-scheduler  Successfully assigned default/busybox to addons-446299
	  Normal   Pulling    15m (x4 over 16m)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     15m (x4 over 16m)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     15m (x4 over 16m)   kubelet            Error: ErrImagePull
	  Warning  Failed     14m (x6 over 16m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    84s (x61 over 16m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-446299/192.168.39.237
	Start Time:       Fri, 20 Sep 2024 18:23:29 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8zg4g (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8zg4g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  8m3s                   default-scheduler  Successfully assigned default/nginx to addons-446299
	  Warning  Failed     7m31s                  kubelet            Failed to pull image "docker.io/nginx:alpine": copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m58s (x2 over 6m30s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    4m8s (x4 over 8m2s)    kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     3m23s (x4 over 7m31s)  kubelet            Error: ErrImagePull
	  Warning  Failed     3m23s                  kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2m58s (x7 over 7m30s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m58s (x7 over 7m30s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod-restore
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-446299/192.168.39.237
	Start Time:       Fri, 20 Sep 2024 18:23:37 +0000
	Labels:           app=task-pv-pod-restore
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zzgp9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc-restore
	    ReadOnly:   false
	  kube-api-access-zzgp9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  7m55s                  default-scheduler  Successfully assigned default/task-pv-pod-restore to addons-446299
	  Warning  Failed     5m29s                  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m37s (x4 over 7m54s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m53s (x3 over 7m)     kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m53s (x4 over 7m)     kubelet            Error: ErrImagePull
	  Normal   BackOff    2m25s (x7 over 7m)     kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     2m25s (x7 over 7m)     kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-sdwls" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2mwr8" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-446299 describe pod busybox nginx task-pv-pod-restore ingress-nginx-admission-create-sdwls ingress-nginx-admission-patch-2mwr8: exit status 1
--- FAIL: TestAddons/parallel/Ingress (482.99s)
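
Every ErrImagePull in the Ingress logs above has the same root cause: Docker Hub's anonymous pull rate limit (toomanyrequests) on docker.io/nginx. A minimal Go sketch, assuming the test host can reach Docker Hub's documented ratelimitpreview/test endpoint, that reads the RateLimit-Limit / RateLimit-Remaining headers the registry reports for anonymous pulls:

	// ratelimit_check.go: query Docker Hub's anonymous pull rate-limit headers.
	// Minimal sketch only; it is not part of the minikube test suite.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"net/http"
	)

	func main() {
		// 1. Fetch an anonymous token scoped to the rate-limit preview repository.
		tokenURL := "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull"
		resp, err := http.Get(tokenURL)
		if err != nil {
			log.Fatalf("token request failed: %v", err)
		}
		defer resp.Body.Close()

		var tok struct {
			Token string `json:"token"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
			log.Fatalf("decoding token: %v", err)
		}

		// 2. HEAD the manifest; the rate-limit counters come back as response headers.
		req, err := http.NewRequest(http.MethodHead,
			"https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
		if err != nil {
			log.Fatalf("building request: %v", err)
		}
		req.Header.Set("Authorization", "Bearer "+tok.Token)

		head, err := http.DefaultClient.Do(req)
		if err != nil {
			log.Fatalf("manifest request failed: %v", err)
		}
		defer head.Body.Close()

		fmt.Println("ratelimit-limit:    ", head.Header.Get("RateLimit-Limit"))
		fmt.Println("ratelimit-remaining:", head.Header.Get("RateLimit-Remaining"))
	}

When RateLimit-Remaining reaches zero, every unauthenticated docker.io pull from the cluster fails exactly as in the pod events above until the limit window resets.
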

                                                
                                    
TestAddons/parallel/MetricsServer (294.6s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.571744ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-dgfgh" [84513540-b090-4d24-b6e0-9ed764434018] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003676494s
addons_test.go:413: (dbg) Run:  kubectl --context addons-446299 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-446299 top pods -n kube-system: exit status 1 (70.776922ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-8b5fx, age: 9m51.157869724s

                                                
                                                
** /stderr **
I0920 18:23:21.160229  748497 retry.go:31] will retry after 2.135285234s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-446299 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-446299 top pods -n kube-system: exit status 1 (70.719435ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-8b5fx, age: 9m53.364364778s

                                                
                                                
** /stderr **
I0920 18:23:23.366545  748497 retry.go:31] will retry after 3.713638444s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-446299 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-446299 top pods -n kube-system: exit status 1 (67.468388ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-8b5fx, age: 9m57.146533251s

                                                
                                                
** /stderr **
I0920 18:23:27.148790  748497 retry.go:31] will retry after 6.582337688s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-446299 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-446299 top pods -n kube-system: exit status 1 (78.771836ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-8b5fx, age: 10m3.808039226s

                                                
                                                
** /stderr **
I0920 18:23:33.810483  748497 retry.go:31] will retry after 10.711147129s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-446299 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-446299 top pods -n kube-system: exit status 1 (68.183174ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-8b5fx, age: 10m14.588381477s

                                                
                                                
** /stderr **
I0920 18:23:44.590598  748497 retry.go:31] will retry after 14.980493665s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-446299 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-446299 top pods -n kube-system: exit status 1 (100.902077ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-8b5fx, age: 10m29.671141225s

                                                
                                                
** /stderr **
I0920 18:23:59.673311  748497 retry.go:31] will retry after 32.448000155s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-446299 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-446299 top pods -n kube-system: exit status 1 (68.910432ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-8b5fx, age: 11m2.187960115s

                                                
                                                
** /stderr **
I0920 18:24:32.190538  748497 retry.go:31] will retry after 30.797248425s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-446299 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-446299 top pods -n kube-system: exit status 1 (74.365171ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-8b5fx, age: 11m33.059868847s

                                                
                                                
** /stderr **
I0920 18:25:03.062637  748497 retry.go:31] will retry after 26.45691582s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-446299 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-446299 top pods -n kube-system: exit status 1 (66.819442ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-8b5fx, age: 11m59.584440594s

                                                
                                                
** /stderr **
I0920 18:25:29.586880  748497 retry.go:31] will retry after 33.816577517s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-446299 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-446299 top pods -n kube-system: exit status 1 (69.759648ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-8b5fx, age: 12m33.471887241s

                                                
                                                
** /stderr **
I0920 18:26:03.474338  748497 retry.go:31] will retry after 43.374945446s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-446299 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-446299 top pods -n kube-system: exit status 1 (69.613433ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-8b5fx, age: 13m16.916950847s

                                                
                                                
** /stderr **
I0920 18:26:46.919745  748497 retry.go:31] will retry after 1m20.734716764s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-446299 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-446299 top pods -n kube-system: exit status 1 (70.40198ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-8b5fx, age: 14m37.72256726s

                                                
                                                
** /stderr **
addons_test.go:427: failed checking metric server: exit status 1
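The retry.go lines above show the harness re-running kubectl top pods with growing, jittered delays until its budget runs out. A self-contained sketch of that poll-with-backoff loop (command and context name copied from the log; the loop itself, deadline, and timings are illustrative assumptions, not minikube's actual retry.go):

	// pollmetrics.go: poll "kubectl top pods" with backoff until it succeeds or a
	// deadline passes. Illustrative only; the 5-minute deadline is an assumption.
	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(5 * time.Minute)
		delay := 2 * time.Second

		for attempt := 1; ; attempt++ {
			out, err := exec.Command("kubectl", "--context", "addons-446299",
				"top", "pods", "-n", "kube-system").CombinedOutput()
			if err == nil {
				fmt.Printf("metrics available after %d attempt(s):\n%s", attempt, out)
				return
			}
			if time.Now().After(deadline) {
				fmt.Printf("giving up after %d attempts: %v\n%s", attempt, err, out)
				return
			}
			// Jittered, roughly doubling delay, mirroring the "will retry after ..." lines above.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("exit status != 0, will retry after %s\n", sleep)
			time.Sleep(sleep)
			delay *= 2
		}
	}

In this run the poll never succeeded: kubectl top pods kept returning "Metrics not available" for the whole retry budget, so the test reports exit status 1 and moves on to disabling the addon.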
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-446299 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-446299 -n addons-446299
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-446299 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-446299 logs -n 25: (1.462625636s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-675466 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | -p download-only-675466                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
	| delete  | -p download-only-675466                                                                     | download-only-675466 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
	| start   | -o=json --download-only                                                                     | download-only-363869 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | -p download-only-363869                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
	| delete  | -p download-only-363869                                                                     | download-only-363869 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
	| delete  | -p download-only-675466                                                                     | download-only-675466 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
	| delete  | -p download-only-363869                                                                     | download-only-363869 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-747965 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | binary-mirror-747965                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39359                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-747965                                                                     | binary-mirror-747965 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
	| addons  | enable dashboard -p                                                                         | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | addons-446299                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | addons-446299                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-446299 --wait=true                                                                | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:14 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:22 UTC | 20 Sep 24 18:22 UTC |
	|         | -p addons-446299                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
	|         | -p addons-446299                                                                            |                      |         |         |                     |                     |
	| addons  | addons-446299 addons disable                                                                | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-446299 addons disable                                                                | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
	|         | addons-446299                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-446299 ssh cat                                                                       | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
	|         | /opt/local-path-provisioner/pvc-11168afa-d97c-4581-90a8-f19b354e2c35_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-446299 addons disable                                                                | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
	|         | addons-446299                                                                               |                      |         |         |                     |                     |
	| ip      | addons-446299 ip                                                                            | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:24 UTC | 20 Sep 24 18:24 UTC |
	| addons  | addons-446299 addons disable                                                                | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:24 UTC | 20 Sep 24 18:24 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-446299 addons                                                                        | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:12:45
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:12:45.452837  749135 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:12:45.452957  749135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:12:45.452966  749135 out.go:358] Setting ErrFile to fd 2...
	I0920 18:12:45.452970  749135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:12:45.453156  749135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
	I0920 18:12:45.453777  749135 out.go:352] Setting JSON to false
	I0920 18:12:45.454793  749135 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6915,"bootTime":1726849050,"procs":270,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:12:45.454907  749135 start.go:139] virtualization: kvm guest
	I0920 18:12:45.457071  749135 out.go:177] * [addons-446299] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:12:45.458344  749135 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 18:12:45.458335  749135 notify.go:220] Checking for updates...
	I0920 18:12:45.459761  749135 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:12:45.461106  749135 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:12:45.462449  749135 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:12:45.463737  749135 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:12:45.465084  749135 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:12:45.466379  749135 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:12:45.497434  749135 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 18:12:45.498519  749135 start.go:297] selected driver: kvm2
	I0920 18:12:45.498542  749135 start.go:901] validating driver "kvm2" against <nil>
	I0920 18:12:45.498561  749135 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:12:45.499322  749135 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:12:45.499411  749135 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19678-739831/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:12:45.513921  749135 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:12:45.513966  749135 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:12:45.514272  749135 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:12:45.514314  749135 cni.go:84] Creating CNI manager for ""
	I0920 18:12:45.514372  749135 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:12:45.514386  749135 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 18:12:45.514458  749135 start.go:340] cluster config:
	{Name:addons-446299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-446299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:12:45.514600  749135 iso.go:125] acquiring lock: {Name:mk7c8e0c52ea50ffb7ac28fb9347d4f667085c62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:12:45.516315  749135 out.go:177] * Starting "addons-446299" primary control-plane node in "addons-446299" cluster
	I0920 18:12:45.517423  749135 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:12:45.517447  749135 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:12:45.517459  749135 cache.go:56] Caching tarball of preloaded images
	I0920 18:12:45.517543  749135 preload.go:172] Found /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:12:45.517552  749135 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:12:45.517857  749135 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/config.json ...
	I0920 18:12:45.517880  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/config.json: {Name:mkaa7e3a2b8a2d95cecdc721e4fd7f5310773e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:12:45.518032  749135 start.go:360] acquireMachinesLock for addons-446299: {Name:mke27b943eaf3105a3a7818ba8cbb5bd07aa92e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:12:45.518095  749135 start.go:364] duration metric: took 46.763µs to acquireMachinesLock for "addons-446299"
	I0920 18:12:45.518131  749135 start.go:93] Provisioning new machine with config: &{Name:addons-446299 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-446299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:12:45.518195  749135 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 18:12:45.520537  749135 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0920 18:12:45.520688  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:12:45.520727  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:12:45.535639  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40567
	I0920 18:12:45.536170  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:12:45.536786  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:12:45.536808  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:12:45.537162  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:12:45.537383  749135 main.go:141] libmachine: (addons-446299) Calling .GetMachineName
	I0920 18:12:45.537540  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:12:45.537694  749135 start.go:159] libmachine.API.Create for "addons-446299" (driver="kvm2")
	I0920 18:12:45.537726  749135 client.go:168] LocalClient.Create starting
	I0920 18:12:45.537791  749135 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem
	I0920 18:12:45.635672  749135 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem
	I0920 18:12:45.854167  749135 main.go:141] libmachine: Running pre-create checks...
	I0920 18:12:45.854195  749135 main.go:141] libmachine: (addons-446299) Calling .PreCreateCheck
	I0920 18:12:45.854768  749135 main.go:141] libmachine: (addons-446299) Calling .GetConfigRaw
	I0920 18:12:45.855238  749135 main.go:141] libmachine: Creating machine...
	I0920 18:12:45.855256  749135 main.go:141] libmachine: (addons-446299) Calling .Create
	I0920 18:12:45.855444  749135 main.go:141] libmachine: (addons-446299) Creating KVM machine...
	I0920 18:12:45.856800  749135 main.go:141] libmachine: (addons-446299) DBG | found existing default KVM network
	I0920 18:12:45.857584  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:45.857437  749157 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015bb0}
	I0920 18:12:45.857661  749135 main.go:141] libmachine: (addons-446299) DBG | created network xml: 
	I0920 18:12:45.857685  749135 main.go:141] libmachine: (addons-446299) DBG | <network>
	I0920 18:12:45.857700  749135 main.go:141] libmachine: (addons-446299) DBG |   <name>mk-addons-446299</name>
	I0920 18:12:45.857710  749135 main.go:141] libmachine: (addons-446299) DBG |   <dns enable='no'/>
	I0920 18:12:45.857722  749135 main.go:141] libmachine: (addons-446299) DBG |   
	I0920 18:12:45.857736  749135 main.go:141] libmachine: (addons-446299) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 18:12:45.857749  749135 main.go:141] libmachine: (addons-446299) DBG |     <dhcp>
	I0920 18:12:45.857762  749135 main.go:141] libmachine: (addons-446299) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 18:12:45.857774  749135 main.go:141] libmachine: (addons-446299) DBG |     </dhcp>
	I0920 18:12:45.857784  749135 main.go:141] libmachine: (addons-446299) DBG |   </ip>
	I0920 18:12:45.857795  749135 main.go:141] libmachine: (addons-446299) DBG |   
	I0920 18:12:45.857805  749135 main.go:141] libmachine: (addons-446299) DBG | </network>
	I0920 18:12:45.857817  749135 main.go:141] libmachine: (addons-446299) DBG | 
	I0920 18:12:45.862810  749135 main.go:141] libmachine: (addons-446299) DBG | trying to create private KVM network mk-addons-446299 192.168.39.0/24...
	I0920 18:12:45.928127  749135 main.go:141] libmachine: (addons-446299) DBG | private KVM network mk-addons-446299 192.168.39.0/24 created
	I0920 18:12:45.928216  749135 main.go:141] libmachine: (addons-446299) Setting up store path in /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299 ...
	I0920 18:12:45.928243  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:45.928106  749157 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:12:45.928255  749135 main.go:141] libmachine: (addons-446299) Building disk image from file:///home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 18:12:45.928282  749135 main.go:141] libmachine: (addons-446299) Downloading /home/jenkins/minikube-integration/19678-739831/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 18:12:46.198371  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:46.198204  749157 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa...
	I0920 18:12:46.306630  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:46.306482  749157 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/addons-446299.rawdisk...
	I0920 18:12:46.306662  749135 main.go:141] libmachine: (addons-446299) DBG | Writing magic tar header
	I0920 18:12:46.306673  749135 main.go:141] libmachine: (addons-446299) DBG | Writing SSH key tar header
	I0920 18:12:46.306681  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:46.306605  749157 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299 ...
	I0920 18:12:46.306695  749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299
	I0920 18:12:46.306758  749135 main.go:141] libmachine: (addons-446299) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299 (perms=drwx------)
	I0920 18:12:46.306798  749135 main.go:141] libmachine: (addons-446299) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube/machines (perms=drwxr-xr-x)
	I0920 18:12:46.306816  749135 main.go:141] libmachine: (addons-446299) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube (perms=drwxr-xr-x)
	I0920 18:12:46.306825  749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube/machines
	I0920 18:12:46.306839  749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:12:46.306872  749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831
	I0920 18:12:46.306884  749135 main.go:141] libmachine: (addons-446299) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831 (perms=drwxrwxr-x)
	I0920 18:12:46.306904  749135 main.go:141] libmachine: (addons-446299) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 18:12:46.306929  749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 18:12:46.306939  749135 main.go:141] libmachine: (addons-446299) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 18:12:46.306952  749135 main.go:141] libmachine: (addons-446299) Creating domain...
	I0920 18:12:46.306963  749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home/jenkins
	I0920 18:12:46.306969  749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home
	I0920 18:12:46.306976  749135 main.go:141] libmachine: (addons-446299) DBG | Skipping /home - not owner
	I0920 18:12:46.308063  749135 main.go:141] libmachine: (addons-446299) define libvirt domain using xml: 
	I0920 18:12:46.308090  749135 main.go:141] libmachine: (addons-446299) <domain type='kvm'>
	I0920 18:12:46.308100  749135 main.go:141] libmachine: (addons-446299)   <name>addons-446299</name>
	I0920 18:12:46.308107  749135 main.go:141] libmachine: (addons-446299)   <memory unit='MiB'>4000</memory>
	I0920 18:12:46.308114  749135 main.go:141] libmachine: (addons-446299)   <vcpu>2</vcpu>
	I0920 18:12:46.308128  749135 main.go:141] libmachine: (addons-446299)   <features>
	I0920 18:12:46.308136  749135 main.go:141] libmachine: (addons-446299)     <acpi/>
	I0920 18:12:46.308144  749135 main.go:141] libmachine: (addons-446299)     <apic/>
	I0920 18:12:46.308150  749135 main.go:141] libmachine: (addons-446299)     <pae/>
	I0920 18:12:46.308156  749135 main.go:141] libmachine: (addons-446299)     
	I0920 18:12:46.308161  749135 main.go:141] libmachine: (addons-446299)   </features>
	I0920 18:12:46.308167  749135 main.go:141] libmachine: (addons-446299)   <cpu mode='host-passthrough'>
	I0920 18:12:46.308172  749135 main.go:141] libmachine: (addons-446299)   
	I0920 18:12:46.308184  749135 main.go:141] libmachine: (addons-446299)   </cpu>
	I0920 18:12:46.308194  749135 main.go:141] libmachine: (addons-446299)   <os>
	I0920 18:12:46.308203  749135 main.go:141] libmachine: (addons-446299)     <type>hvm</type>
	I0920 18:12:46.308221  749135 main.go:141] libmachine: (addons-446299)     <boot dev='cdrom'/>
	I0920 18:12:46.308234  749135 main.go:141] libmachine: (addons-446299)     <boot dev='hd'/>
	I0920 18:12:46.308243  749135 main.go:141] libmachine: (addons-446299)     <bootmenu enable='no'/>
	I0920 18:12:46.308250  749135 main.go:141] libmachine: (addons-446299)   </os>
	I0920 18:12:46.308255  749135 main.go:141] libmachine: (addons-446299)   <devices>
	I0920 18:12:46.308262  749135 main.go:141] libmachine: (addons-446299)     <disk type='file' device='cdrom'>
	I0920 18:12:46.308277  749135 main.go:141] libmachine: (addons-446299)       <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/boot2docker.iso'/>
	I0920 18:12:46.308290  749135 main.go:141] libmachine: (addons-446299)       <target dev='hdc' bus='scsi'/>
	I0920 18:12:46.308302  749135 main.go:141] libmachine: (addons-446299)       <readonly/>
	I0920 18:12:46.308312  749135 main.go:141] libmachine: (addons-446299)     </disk>
	I0920 18:12:46.308324  749135 main.go:141] libmachine: (addons-446299)     <disk type='file' device='disk'>
	I0920 18:12:46.308335  749135 main.go:141] libmachine: (addons-446299)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 18:12:46.308350  749135 main.go:141] libmachine: (addons-446299)       <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/addons-446299.rawdisk'/>
	I0920 18:12:46.308364  749135 main.go:141] libmachine: (addons-446299)       <target dev='hda' bus='virtio'/>
	I0920 18:12:46.308376  749135 main.go:141] libmachine: (addons-446299)     </disk>
	I0920 18:12:46.308386  749135 main.go:141] libmachine: (addons-446299)     <interface type='network'>
	I0920 18:12:46.308395  749135 main.go:141] libmachine: (addons-446299)       <source network='mk-addons-446299'/>
	I0920 18:12:46.308404  749135 main.go:141] libmachine: (addons-446299)       <model type='virtio'/>
	I0920 18:12:46.308414  749135 main.go:141] libmachine: (addons-446299)     </interface>
	I0920 18:12:46.308424  749135 main.go:141] libmachine: (addons-446299)     <interface type='network'>
	I0920 18:12:46.308440  749135 main.go:141] libmachine: (addons-446299)       <source network='default'/>
	I0920 18:12:46.308454  749135 main.go:141] libmachine: (addons-446299)       <model type='virtio'/>
	I0920 18:12:46.308462  749135 main.go:141] libmachine: (addons-446299)     </interface>
	I0920 18:12:46.308467  749135 main.go:141] libmachine: (addons-446299)     <serial type='pty'>
	I0920 18:12:46.308472  749135 main.go:141] libmachine: (addons-446299)       <target port='0'/>
	I0920 18:12:46.308478  749135 main.go:141] libmachine: (addons-446299)     </serial>
	I0920 18:12:46.308486  749135 main.go:141] libmachine: (addons-446299)     <console type='pty'>
	I0920 18:12:46.308493  749135 main.go:141] libmachine: (addons-446299)       <target type='serial' port='0'/>
	I0920 18:12:46.308498  749135 main.go:141] libmachine: (addons-446299)     </console>
	I0920 18:12:46.308504  749135 main.go:141] libmachine: (addons-446299)     <rng model='virtio'>
	I0920 18:12:46.308512  749135 main.go:141] libmachine: (addons-446299)       <backend model='random'>/dev/random</backend>
	I0920 18:12:46.308518  749135 main.go:141] libmachine: (addons-446299)     </rng>
	I0920 18:12:46.308522  749135 main.go:141] libmachine: (addons-446299)     
	I0920 18:12:46.308528  749135 main.go:141] libmachine: (addons-446299)     
	I0920 18:12:46.308544  749135 main.go:141] libmachine: (addons-446299)   </devices>
	I0920 18:12:46.308556  749135 main.go:141] libmachine: (addons-446299) </domain>
	I0920 18:12:46.308574  749135 main.go:141] libmachine: (addons-446299) 
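For reference, a rough sketch, assuming virsh is installed and a qemu:///system libvirt daemon is reachable, of registering a domain from an XML document shaped like the one printed above; the kvm2 driver does the equivalent through the libvirt API rather than shelling out.

// domain_define_sketch.go: write a minimal domain XML and register it with virsh.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

const domainXML = `<domain type='kvm'>
  <name>example-node</name>
  <memory unit='MiB'>4000</memory>
  <vcpu>2</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='hd'/>
  </os>
  <devices>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>`

func main() {
	// Write the XML to a temp file, then hand it to virsh.
	f, err := os.CreateTemp("", "domain-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(domainXML); err != nil {
		panic(err)
	}
	f.Close()

	// `virsh define` registers the domain; `virsh start` would then boot it.
	out, err := exec.Command("virsh", "--connect", "qemu:///system", "define", f.Name()).CombinedOutput()
	fmt.Printf("virsh define: %s (err=%v)\n", out, err)
}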
	I0920 18:12:46.314191  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:13:6e:16 in network default
	I0920 18:12:46.314696  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:46.314712  749135 main.go:141] libmachine: (addons-446299) Ensuring networks are active...
	I0920 18:12:46.315254  749135 main.go:141] libmachine: (addons-446299) Ensuring network default is active
	I0920 18:12:46.315494  749135 main.go:141] libmachine: (addons-446299) Ensuring network mk-addons-446299 is active
	I0920 18:12:46.315890  749135 main.go:141] libmachine: (addons-446299) Getting domain xml...
	I0920 18:12:46.316428  749135 main.go:141] libmachine: (addons-446299) Creating domain...
	I0920 18:12:47.702575  749135 main.go:141] libmachine: (addons-446299) Waiting to get IP...
	I0920 18:12:47.703586  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:47.704120  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:47.704148  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:47.704086  749157 retry.go:31] will retry after 271.659022ms: waiting for machine to come up
	I0920 18:12:47.977759  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:47.978244  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:47.978271  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:47.978199  749157 retry.go:31] will retry after 286.269777ms: waiting for machine to come up
	I0920 18:12:48.265706  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:48.266154  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:48.266176  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:48.266104  749157 retry.go:31] will retry after 302.528012ms: waiting for machine to come up
	I0920 18:12:48.570875  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:48.571362  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:48.571386  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:48.571312  749157 retry.go:31] will retry after 579.846713ms: waiting for machine to come up
	I0920 18:12:49.153045  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:49.153478  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:49.153506  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:49.153418  749157 retry.go:31] will retry after 501.770816ms: waiting for machine to come up
	I0920 18:12:49.657032  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:49.657383  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:49.657410  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:49.657355  749157 retry.go:31] will retry after 903.967154ms: waiting for machine to come up
	I0920 18:12:50.562781  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:50.563350  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:50.563375  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:50.563286  749157 retry.go:31] will retry after 1.03177474s: waiting for machine to come up
	I0920 18:12:51.596424  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:51.596850  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:51.596971  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:51.596890  749157 retry.go:31] will retry after 1.278733336s: waiting for machine to come up
	I0920 18:12:52.877368  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:52.877732  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:52.877761  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:52.877690  749157 retry.go:31] will retry after 1.241144447s: waiting for machine to come up
	I0920 18:12:54.121228  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:54.121598  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:54.121623  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:54.121564  749157 retry.go:31] will retry after 2.253509148s: waiting for machine to come up
	I0920 18:12:56.377139  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:56.377598  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:56.377630  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:56.377537  749157 retry.go:31] will retry after 2.563830681s: waiting for machine to come up
	I0920 18:12:58.944264  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:58.944679  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:58.944723  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:58.944624  749157 retry.go:31] will retry after 2.392098661s: waiting for machine to come up
	I0920 18:13:01.339634  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:01.340032  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:13:01.340088  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:13:01.339990  749157 retry.go:31] will retry after 2.800869076s: waiting for machine to come up
	I0920 18:13:04.142006  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:04.142476  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:13:04.142500  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:13:04.142411  749157 retry.go:31] will retry after 4.101773144s: waiting for machine to come up
	I0920 18:13:08.247401  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.247831  749135 main.go:141] libmachine: (addons-446299) Found IP for machine: 192.168.39.237
	I0920 18:13:08.247867  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has current primary IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.247875  749135 main.go:141] libmachine: (addons-446299) Reserving static IP address...
	I0920 18:13:08.248197  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find host DHCP lease matching {name: "addons-446299", mac: "52:54:00:33:9c:3e", ip: "192.168.39.237"} in network mk-addons-446299
	I0920 18:13:08.320366  749135 main.go:141] libmachine: (addons-446299) DBG | Getting to WaitForSSH function...
	I0920 18:13:08.320400  749135 main.go:141] libmachine: (addons-446299) Reserved static IP address: 192.168.39.237
	I0920 18:13:08.320413  749135 main.go:141] libmachine: (addons-446299) Waiting for SSH to be available...
	I0920 18:13:08.323450  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.323840  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:minikube Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:08.323876  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.324043  749135 main.go:141] libmachine: (addons-446299) DBG | Using SSH client type: external
	I0920 18:13:08.324075  749135 main.go:141] libmachine: (addons-446299) DBG | Using SSH private key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa (-rw-------)
	I0920 18:13:08.324116  749135 main.go:141] libmachine: (addons-446299) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.237 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:13:08.324134  749135 main.go:141] libmachine: (addons-446299) DBG | About to run SSH command:
	I0920 18:13:08.324145  749135 main.go:141] libmachine: (addons-446299) DBG | exit 0
	I0920 18:13:08.447247  749135 main.go:141] libmachine: (addons-446299) DBG | SSH cmd err, output: <nil>: 
	I0920 18:13:08.447526  749135 main.go:141] libmachine: (addons-446299) KVM machine creation complete!
	I0920 18:13:08.447847  749135 main.go:141] libmachine: (addons-446299) Calling .GetConfigRaw
	I0920 18:13:08.448509  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:08.448699  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:08.448836  749135 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 18:13:08.448855  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:08.450187  749135 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 18:13:08.450200  749135 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 18:13:08.450206  749135 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 18:13:08.450212  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:08.452411  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.452723  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:08.452751  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.452850  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:08.453019  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.453174  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.453318  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:08.453492  749135 main.go:141] libmachine: Using SSH client type: native
	I0920 18:13:08.453697  749135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0920 18:13:08.453711  749135 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 18:13:08.550007  749135 main.go:141] libmachine: SSH cmd err, output: <nil>: 
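A minimal sketch of the same "native" SSH availability probe, assuming golang.org/x/crypto/ssh and using the key path, user, and address shown in the log: connect with the machine's private key and run exit 0.

// ssh_probe_sketch.go: confirm SSH is available on the new VM.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := "/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa"
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User: "docker",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Mirrors StrictHostKeyChecking=no / UserKnownHostsFile=/dev/null above.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "192.168.39.237:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	if err := session.Run("exit 0"); err != nil {
		panic(err)
	}
	fmt.Println("SSH is available")
}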
	I0920 18:13:08.550034  749135 main.go:141] libmachine: Detecting the provisioner...
	I0920 18:13:08.550043  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:08.552709  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.553024  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:08.553055  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.553193  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:08.553387  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.553523  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.553628  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:08.553820  749135 main.go:141] libmachine: Using SSH client type: native
	I0920 18:13:08.554035  749135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0920 18:13:08.554048  749135 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 18:13:08.651415  749135 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 18:13:08.651508  749135 main.go:141] libmachine: found compatible host: buildroot
	I0920 18:13:08.651519  749135 main.go:141] libmachine: Provisioning with buildroot...
	I0920 18:13:08.651527  749135 main.go:141] libmachine: (addons-446299) Calling .GetMachineName
	I0920 18:13:08.651799  749135 buildroot.go:166] provisioning hostname "addons-446299"
	I0920 18:13:08.651833  749135 main.go:141] libmachine: (addons-446299) Calling .GetMachineName
	I0920 18:13:08.652051  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:08.654630  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.654993  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:08.655016  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.655142  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:08.655325  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.655472  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.655580  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:08.655728  749135 main.go:141] libmachine: Using SSH client type: native
	I0920 18:13:08.655930  749135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0920 18:13:08.655944  749135 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-446299 && echo "addons-446299" | sudo tee /etc/hostname
	I0920 18:13:08.764545  749135 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-446299
	
	I0920 18:13:08.764579  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:08.767492  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.767918  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:08.767944  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.768198  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:08.768402  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.768591  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.768737  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:08.768929  749135 main.go:141] libmachine: Using SSH client type: native
	I0920 18:13:08.769151  749135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0920 18:13:08.769174  749135 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-446299' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-446299/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-446299' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:13:08.875844  749135 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:13:08.875886  749135 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19678-739831/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-739831/.minikube}
	I0920 18:13:08.875933  749135 buildroot.go:174] setting up certificates
	I0920 18:13:08.875949  749135 provision.go:84] configureAuth start
	I0920 18:13:08.875963  749135 main.go:141] libmachine: (addons-446299) Calling .GetMachineName
	I0920 18:13:08.876262  749135 main.go:141] libmachine: (addons-446299) Calling .GetIP
	I0920 18:13:08.878744  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.879098  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:08.879119  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.879270  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:08.881403  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.881836  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:08.881865  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.881970  749135 provision.go:143] copyHostCerts
	I0920 18:13:08.882095  749135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem (1078 bytes)
	I0920 18:13:08.882283  749135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem (1123 bytes)
	I0920 18:13:08.882377  749135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem (1679 bytes)
	I0920 18:13:08.882472  749135 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem org=jenkins.addons-446299 san=[127.0.0.1 192.168.39.237 addons-446299 localhost minikube]
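A rough sketch of issuing a server certificate carrying the SAN entries listed in the line above; it is self-signed here for brevity, whereas minikube signs the server cert with its CA key.

// server_cert_sketch.go: generate an RSA key and a certificate with DNS/IP SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-446299"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN entries taken from the log line above.
		DNSNames:    []string{"addons-446299", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.237")},
	}
	// Self-signed: template is used as its own parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}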
	I0920 18:13:09.208189  749135 provision.go:177] copyRemoteCerts
	I0920 18:13:09.208279  749135 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:13:09.208315  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:09.211040  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.211327  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.211351  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.211544  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:09.211780  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.211947  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:09.212123  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:09.297180  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:13:09.320798  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 18:13:09.344012  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:13:09.366859  749135 provision.go:87] duration metric: took 490.878212ms to configureAuth
	I0920 18:13:09.366893  749135 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:13:09.367101  749135 config.go:182] Loaded profile config "addons-446299": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:13:09.367184  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:09.369576  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.369868  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.369896  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.370087  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:09.370268  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.370416  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.370568  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:09.370692  749135 main.go:141] libmachine: Using SSH client type: native
	I0920 18:13:09.370898  749135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0920 18:13:09.370918  749135 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:13:09.580901  749135 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:13:09.580930  749135 main.go:141] libmachine: Checking connection to Docker...
	I0920 18:13:09.580938  749135 main.go:141] libmachine: (addons-446299) Calling .GetURL
	I0920 18:13:09.582415  749135 main.go:141] libmachine: (addons-446299) DBG | Using libvirt version 6000000
	I0920 18:13:09.584573  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.584892  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.584919  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.585053  749135 main.go:141] libmachine: Docker is up and running!
	I0920 18:13:09.585065  749135 main.go:141] libmachine: Reticulating splines...
	I0920 18:13:09.585073  749135 client.go:171] duration metric: took 24.047336599s to LocalClient.Create
	I0920 18:13:09.585100  749135 start.go:167] duration metric: took 24.047408021s to libmachine.API.Create "addons-446299"
	I0920 18:13:09.585116  749135 start.go:293] postStartSetup for "addons-446299" (driver="kvm2")
	I0920 18:13:09.585129  749135 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:13:09.585147  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:09.585408  749135 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:13:09.585435  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:09.587350  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.587666  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.587695  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.587795  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:09.587993  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.588132  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:09.588235  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:09.664940  749135 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:13:09.669300  749135 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:13:09.669326  749135 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/addons for local assets ...
	I0920 18:13:09.669399  749135 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/files for local assets ...
	I0920 18:13:09.669426  749135 start.go:296] duration metric: took 84.302482ms for postStartSetup
	I0920 18:13:09.669464  749135 main.go:141] libmachine: (addons-446299) Calling .GetConfigRaw
	I0920 18:13:09.670097  749135 main.go:141] libmachine: (addons-446299) Calling .GetIP
	I0920 18:13:09.672635  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.673027  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.673059  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.673292  749135 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/config.json ...
	I0920 18:13:09.673507  749135 start.go:128] duration metric: took 24.155298051s to createHost
	I0920 18:13:09.673535  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:09.675782  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.676085  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.676118  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.676239  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:09.676425  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.676577  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.676704  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:09.676850  749135 main.go:141] libmachine: Using SSH client type: native
	I0920 18:13:09.677016  749135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0920 18:13:09.677026  749135 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:13:09.775435  749135 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726855989.751621835
	
	I0920 18:13:09.775464  749135 fix.go:216] guest clock: 1726855989.751621835
	I0920 18:13:09.775474  749135 fix.go:229] Guest: 2024-09-20 18:13:09.751621835 +0000 UTC Remote: 2024-09-20 18:13:09.673520947 +0000 UTC m=+24.255782208 (delta=78.100888ms)
	I0920 18:13:09.775526  749135 fix.go:200] guest clock delta is within tolerance: 78.100888ms
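A small sketch of the clock-skew check above: the guest's date +%s.%N output is compared with the host-side timestamp recorded when the command returned, and the delta must stay within a tolerance (the one-second value below is an assumption for illustration).

// clock_delta_sketch.go: reproduce the ~78ms guest/host clock delta from the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1726855989, 751621835)                      // from the VM's date +%s.%N
	host := time.Date(2024, 9, 20, 18, 13, 9, 673520947, time.UTC) // recorded on the host
	delta := guest.Sub(host)
	fmt.Println("delta:", delta) // ~78.100888ms

	const tolerance = time.Second // assumed tolerance for this sketch
	fmt.Println("within tolerance:", delta > -tolerance && delta < tolerance)
}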
	I0920 18:13:09.775540  749135 start.go:83] releasing machines lock for "addons-446299", held for 24.257428579s
	I0920 18:13:09.775567  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:09.775862  749135 main.go:141] libmachine: (addons-446299) Calling .GetIP
	I0920 18:13:09.778659  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.779012  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.779037  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.779220  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:09.779691  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:09.779841  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:09.779938  749135 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:13:09.779984  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:09.780090  749135 ssh_runner.go:195] Run: cat /version.json
	I0920 18:13:09.780115  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:09.782348  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.782682  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.782703  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.782721  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.782827  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:09.783033  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.783120  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.783141  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.783235  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:09.783325  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:09.783381  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:09.783467  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.783589  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:09.783728  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:09.855541  749135 ssh_runner.go:195] Run: systemctl --version
	I0920 18:13:09.885114  749135 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:13:10.038473  749135 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:13:10.044604  749135 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:13:10.044673  749135 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:13:10.061773  749135 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:13:10.061802  749135 start.go:495] detecting cgroup driver to use...
	I0920 18:13:10.061871  749135 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:13:10.078163  749135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:13:10.092123  749135 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:13:10.092186  749135 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:13:10.105354  749135 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:13:10.118581  749135 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:13:10.228500  749135 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:13:10.385243  749135 docker.go:233] disabling docker service ...
	I0920 18:13:10.385317  749135 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:13:10.399346  749135 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:13:10.411799  749135 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:13:10.532538  749135 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:13:10.657590  749135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:13:10.672417  749135 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:13:10.690910  749135 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:13:10.690989  749135 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:13:10.701918  749135 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:13:10.702004  749135 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:13:10.712909  749135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:13:10.723847  749135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:13:10.734707  749135 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:13:10.745859  749135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:13:10.756720  749135 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:13:10.781698  749135 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
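
Taken together, the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the pause:3.10 image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and an unprivileged-port sysctl. The result can be spot-checked over SSH; the commented output below is reconstructed from those commands, not dumped from the actual file:

# Expected settings after the edits above (reconstruction, not a file dump).
sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
  /etc/crio/crio.conf.d/02-crio.conf
# pause_image = "registry.k8s.io/pause:3.10"
# cgroup_manager = "cgroupfs"
# conmon_cgroup = "pod"
# default_sysctls = [
#   "net.ipv4.ip_unprivileged_port_start=0",
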
	I0920 18:13:10.792301  749135 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:13:10.801512  749135 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:13:10.801614  749135 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:13:10.815061  749135 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:13:10.824568  749135 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:13:10.942263  749135 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:13:11.344964  749135 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:13:11.345085  749135 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:13:11.350594  749135 start.go:563] Will wait 60s for crictl version
	I0920 18:13:11.350677  749135 ssh_runner.go:195] Run: which crictl
	I0920 18:13:11.354600  749135 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:13:11.392003  749135 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:13:11.392112  749135 ssh_runner.go:195] Run: crio --version
	I0920 18:13:11.424468  749135 ssh_runner.go:195] Run: crio --version
	I0920 18:13:11.468344  749135 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:13:11.469889  749135 main.go:141] libmachine: (addons-446299) Calling .GetIP
	I0920 18:13:11.472633  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:11.472955  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:11.472986  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:11.473236  749135 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:13:11.477639  749135 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:13:11.490126  749135 kubeadm.go:883] updating cluster {Name:addons-446299 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-446299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:13:11.490246  749135 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:13:11.490303  749135 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:13:11.522179  749135 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 18:13:11.522257  749135 ssh_runner.go:195] Run: which lz4
	I0920 18:13:11.526368  749135 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:13:11.530534  749135 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:13:11.530569  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 18:13:12.754100  749135 crio.go:462] duration metric: took 1.227762585s to copy over tarball
	I0920 18:13:12.754195  749135 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:13:14.814758  749135 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.060523421s)
	I0920 18:13:14.814798  749135 crio.go:469] duration metric: took 2.06066428s to extract the tarball
	I0920 18:13:14.814808  749135 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 18:13:14.850931  749135 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:13:14.892855  749135 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:13:14.892884  749135 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:13:14.892894  749135 kubeadm.go:934] updating node { 192.168.39.237 8443 v1.31.1 crio true true} ...
	I0920 18:13:14.893002  749135 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-446299 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.237
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-446299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:13:14.893069  749135 ssh_runner.go:195] Run: crio config
	I0920 18:13:14.935948  749135 cni.go:84] Creating CNI manager for ""
	I0920 18:13:14.935974  749135 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:13:14.935987  749135 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:13:14.936010  749135 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.237 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-446299 NodeName:addons-446299 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.237"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.237 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:13:14.936153  749135 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.237
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-446299"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.237
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.237"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:13:14.936224  749135 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:13:14.945879  749135 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:13:14.945951  749135 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:13:14.955112  749135 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 18:13:14.971443  749135 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:13:14.987494  749135 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0920 18:13:15.004128  749135 ssh_runner.go:195] Run: grep 192.168.39.237	control-plane.minikube.internal$ /etc/hosts
	I0920 18:13:15.008311  749135 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.237	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:13:15.020386  749135 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:13:15.143207  749135 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:13:15.160928  749135 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299 for IP: 192.168.39.237
	I0920 18:13:15.160952  749135 certs.go:194] generating shared ca certs ...
	I0920 18:13:15.160971  749135 certs.go:226] acquiring lock for ca certs: {Name:mkf559981e1ff96dd3b092845a7637f34a653668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.161127  749135 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key
	I0920 18:13:15.288325  749135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt ...
	I0920 18:13:15.288359  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt: {Name:mkd07e710befe398f359697123be87266dbb73cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.288526  749135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key ...
	I0920 18:13:15.288537  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key: {Name:mk8452559729a4e6fe54cdcaa3db5cb2d03b365d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.288610  749135 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key
	I0920 18:13:15.460720  749135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt ...
	I0920 18:13:15.460749  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt: {Name:mkd5912367400d11fe28d50162d9491c1c026ad6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.460926  749135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key ...
	I0920 18:13:15.460946  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key: {Name:mk7b4a10567303413b299060d87451a86c82a4b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.461047  749135 certs.go:256] generating profile certs ...
	I0920 18:13:15.461131  749135 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.key
	I0920 18:13:15.461148  749135 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt with IP's: []
	I0920 18:13:15.666412  749135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt ...
	I0920 18:13:15.666455  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: {Name:mkef01489d7dcf2bfb46ac5af11bed50283fb691 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.666668  749135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.key ...
	I0920 18:13:15.666687  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.key: {Name:mkce7236a454e2c0202c83ef853c169198fb2f81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.666791  749135 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.key.77016387
	I0920 18:13:15.666816  749135 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.crt.77016387 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.237]
	I0920 18:13:15.705625  749135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.crt.77016387 ...
	I0920 18:13:15.705654  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.crt.77016387: {Name:mk64bf6bb73ff35990c8781efc3d30626dc3ca21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.705826  749135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.key.77016387 ...
	I0920 18:13:15.705843  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.key.77016387: {Name:mk18ead88f15a69013b31853d623fd0cb8c39466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.705941  749135 certs.go:381] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.crt.77016387 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.crt
	I0920 18:13:15.706040  749135 certs.go:385] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.key.77016387 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.key
	I0920 18:13:15.706114  749135 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.key
	I0920 18:13:15.706140  749135 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.crt with IP's: []
	I0920 18:13:15.788260  749135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.crt ...
	I0920 18:13:15.788293  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.crt: {Name:mk5ff8fc31363db98a0f0ca7278de49be24b8420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.788475  749135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.key ...
	I0920 18:13:15.788494  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.key: {Name:mk7a90a72aaffce450a2196a523cb38d8ddfd4f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.788714  749135 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:13:15.788762  749135 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:13:15.788796  749135 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:13:15.788835  749135 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem (1679 bytes)
	I0920 18:13:15.789513  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:13:15.814280  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:13:15.838979  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:13:15.861251  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:13:15.883772  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 18:13:15.906899  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:13:15.930055  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:13:15.952960  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:13:15.976078  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:13:15.998990  749135 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:13:16.015378  749135 ssh_runner.go:195] Run: openssl version
	I0920 18:13:16.021288  749135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:13:16.031743  749135 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:13:16.036218  749135 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:13:16.036292  749135 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:13:16.041983  749135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
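
The b5213941.0 name in the command above comes from OpenSSL's subject-hash naming scheme for CA directories: the hash is computed from the certificate and used as the symlink name so OpenSSL can locate the CA by subject. The equivalent two-step recipe, mirroring the two ssh_runner commands above:

# Derive the subject hash of the minikube CA and link it into /etc/ssl/certs.
hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # -> b5213941.0
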
	I0920 18:13:16.052410  749135 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:13:16.056509  749135 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 18:13:16.056561  749135 kubeadm.go:392] StartCluster: {Name:addons-446299 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-446299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:13:16.056643  749135 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:13:16.056724  749135 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:13:16.093233  749135 cri.go:89] found id: ""
	I0920 18:13:16.093305  749135 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:13:16.103183  749135 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:13:16.112220  749135 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:13:16.121055  749135 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:13:16.121076  749135 kubeadm.go:157] found existing configuration files:
	
	I0920 18:13:16.121125  749135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:13:16.129727  749135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:13:16.129793  749135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:13:16.138769  749135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:13:16.147343  749135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:13:16.147401  749135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:13:16.156084  749135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:13:16.164356  749135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:13:16.164409  749135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:13:16.172957  749135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:13:16.181269  749135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:13:16.181319  749135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:13:16.189971  749135 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:13:16.241816  749135 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 18:13:16.242023  749135 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:13:16.343705  749135 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:13:16.343865  749135 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:13:16.344016  749135 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 18:13:16.353422  749135 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:13:16.356505  749135 out.go:235]   - Generating certificates and keys ...
	I0920 18:13:16.356621  749135 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:13:16.356707  749135 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:13:16.567905  749135 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 18:13:16.678138  749135 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 18:13:16.903150  749135 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 18:13:17.220781  749135 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 18:13:17.330970  749135 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 18:13:17.331262  749135 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-446299 localhost] and IPs [192.168.39.237 127.0.0.1 ::1]
	I0920 18:13:17.404562  749135 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 18:13:17.404723  749135 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-446299 localhost] and IPs [192.168.39.237 127.0.0.1 ::1]
	I0920 18:13:17.558748  749135 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 18:13:17.723982  749135 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 18:13:17.850510  749135 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 18:13:17.850712  749135 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:13:17.910185  749135 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:13:18.072173  749135 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 18:13:18.135494  749135 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:13:18.547143  749135 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:13:18.760484  749135 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:13:18.761203  749135 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:13:18.765007  749135 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:13:18.801126  749135 out.go:235]   - Booting up control plane ...
	I0920 18:13:18.801251  749135 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:13:18.801344  749135 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:13:18.801424  749135 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:13:18.801571  749135 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:13:18.801721  749135 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:13:18.801785  749135 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:13:18.927609  749135 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 18:13:18.927774  749135 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 18:13:19.928576  749135 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001817815s
	I0920 18:13:19.928734  749135 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 18:13:24.427415  749135 kubeadm.go:310] [api-check] The API server is healthy after 4.501490258s
	I0920 18:13:24.439460  749135 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 18:13:24.456660  749135 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 18:13:24.489726  749135 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 18:13:24.489974  749135 kubeadm.go:310] [mark-control-plane] Marking the node addons-446299 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 18:13:24.502419  749135 kubeadm.go:310] [bootstrap-token] Using token: 2qbco4.c4cth5cwyyzw51bf
	I0920 18:13:24.503870  749135 out.go:235]   - Configuring RBAC rules ...
	I0920 18:13:24.504029  749135 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 18:13:24.514334  749135 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 18:13:24.520831  749135 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 18:13:24.524418  749135 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 18:13:24.527658  749135 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 18:13:24.533751  749135 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 18:13:24.833210  749135 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 18:13:25.263206  749135 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 18:13:25.833304  749135 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 18:13:25.834184  749135 kubeadm.go:310] 
	I0920 18:13:25.834298  749135 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 18:13:25.834327  749135 kubeadm.go:310] 
	I0920 18:13:25.834438  749135 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 18:13:25.834450  749135 kubeadm.go:310] 
	I0920 18:13:25.834490  749135 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 18:13:25.834595  749135 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 18:13:25.834657  749135 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 18:13:25.834674  749135 kubeadm.go:310] 
	I0920 18:13:25.834745  749135 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 18:13:25.834754  749135 kubeadm.go:310] 
	I0920 18:13:25.834980  749135 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 18:13:25.834997  749135 kubeadm.go:310] 
	I0920 18:13:25.835059  749135 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 18:13:25.835163  749135 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 18:13:25.835253  749135 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 18:13:25.835263  749135 kubeadm.go:310] 
	I0920 18:13:25.835376  749135 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 18:13:25.835483  749135 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 18:13:25.835490  749135 kubeadm.go:310] 
	I0920 18:13:25.835595  749135 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2qbco4.c4cth5cwyyzw51bf \
	I0920 18:13:25.835757  749135 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:947ef21afc8104efa9fe7e5dbe397ab7540e2665a521761d784eb9c9d11b061d \
	I0920 18:13:25.835806  749135 kubeadm.go:310] 	--control-plane 
	I0920 18:13:25.835816  749135 kubeadm.go:310] 
	I0920 18:13:25.835914  749135 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 18:13:25.835926  749135 kubeadm.go:310] 
	I0920 18:13:25.836021  749135 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2qbco4.c4cth5cwyyzw51bf \
	I0920 18:13:25.836149  749135 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:947ef21afc8104efa9fe7e5dbe397ab7540e2665a521761d784eb9c9d11b061d 
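
The --discovery-token-ca-cert-hash value in the join command above is the SHA-256 of the cluster CA's public key. Since this cluster keeps its certificates under /var/lib/minikube/certs (see the kubeadm config earlier in this log), it can be recomputed on the control plane with the standard kubeadm recipe:

# Recompute the discovery token CA cert hash from the cluster CA.
openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
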
	I0920 18:13:25.837593  749135 kubeadm.go:310] W0920 18:13:16.222475     810 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:13:25.837868  749135 kubeadm.go:310] W0920 18:13:16.223486     810 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:13:25.837990  749135 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:13:25.838019  749135 cni.go:84] Creating CNI manager for ""
	I0920 18:13:25.838028  749135 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:13:25.839751  749135 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 18:13:25.840949  749135 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 18:13:25.852783  749135 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
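
The 496-byte 1-k8s.conflist written above carries the bridge CNI configuration chosen a few lines earlier. As an illustration only, a bridge-plus-portmap chain in the same spirit looks like the following; every field here is an assumption except the 10.244.0.0/16 pod CIDR, which appears in the kubeadm options earlier in this log:

# Illustrative bridge + portmap CNI chain; not a byte-for-byte copy of the
# file minikube writes.
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
EOF
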
	I0920 18:13:25.871921  749135 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:13:25.871998  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:25.872010  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-446299 minikube.k8s.io/updated_at=2024_09_20T18_13_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a minikube.k8s.io/name=addons-446299 minikube.k8s.io/primary=true
	I0920 18:13:25.893378  749135 ops.go:34] apiserver oom_adj: -16
	I0920 18:13:26.025723  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:26.526635  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:27.026038  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:27.526100  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:28.026195  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:28.526494  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:29.026560  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:29.526369  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:30.026015  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:30.116670  749135 kubeadm.go:1113] duration metric: took 4.244739753s to wait for elevateKubeSystemPrivileges
	I0920 18:13:30.116706  749135 kubeadm.go:394] duration metric: took 14.06015239s to StartCluster
	I0920 18:13:30.116726  749135 settings.go:142] acquiring lock: {Name:mk0bd1e421bf437575c076c52c1ff2f74497a1ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:30.116861  749135 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:13:30.117227  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/kubeconfig: {Name:mk275c54cf52b0ccdc22fcaa39c7b9c31092c648 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:30.117422  749135 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 18:13:30.117448  749135 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:13:30.117512  749135 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 18:13:30.117640  749135 addons.go:69] Setting yakd=true in profile "addons-446299"
	I0920 18:13:30.117667  749135 addons.go:234] Setting addon yakd=true in "addons-446299"
	I0920 18:13:30.117700  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.117727  749135 config.go:182] Loaded profile config "addons-446299": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:13:30.117688  749135 addons.go:69] Setting default-storageclass=true in profile "addons-446299"
	I0920 18:13:30.117804  749135 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-446299"
	I0920 18:13:30.117694  749135 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-446299"
	I0920 18:13:30.117828  749135 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-446299"
	I0920 18:13:30.117867  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.117708  749135 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-446299"
	I0920 18:13:30.117998  749135 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-446299"
	I0920 18:13:30.117714  749135 addons.go:69] Setting inspektor-gadget=true in profile "addons-446299"
	I0920 18:13:30.118028  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.118044  749135 addons.go:234] Setting addon inspektor-gadget=true in "addons-446299"
	I0920 18:13:30.118082  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.117716  749135 addons.go:69] Setting gcp-auth=true in profile "addons-446299"
	I0920 18:13:30.118200  749135 mustload.go:65] Loading cluster: addons-446299
	I0920 18:13:30.118199  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.118219  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.117703  749135 addons.go:69] Setting ingress-dns=true in profile "addons-446299"
	I0920 18:13:30.118237  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.118242  749135 addons.go:234] Setting addon ingress-dns=true in "addons-446299"
	I0920 18:13:30.118250  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.118270  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.118376  749135 config.go:182] Loaded profile config "addons-446299": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:13:30.118380  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.118401  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.118492  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.118530  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.118647  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.118678  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.117720  749135 addons.go:69] Setting metrics-server=true in profile "addons-446299"
	I0920 18:13:30.118748  749135 addons.go:234] Setting addon metrics-server=true in "addons-446299"
	I0920 18:13:30.118777  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.118823  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.118831  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.118883  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.118889  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.117726  749135 addons.go:69] Setting ingress=true in profile "addons-446299"
	I0920 18:13:30.119096  749135 addons.go:234] Setting addon ingress=true in "addons-446299"
	I0920 18:13:30.119137  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.117736  749135 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-446299"
	I0920 18:13:30.119353  749135 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-446299"
	I0920 18:13:30.119501  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.119521  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.119740  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.117735  749135 addons.go:69] Setting registry=true in profile "addons-446299"
	I0920 18:13:30.119761  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.119766  749135 addons.go:234] Setting addon registry=true in "addons-446299"
	I0920 18:13:30.119795  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.120169  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.120211  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.117735  749135 addons.go:69] Setting cloud-spanner=true in profile "addons-446299"
	I0920 18:13:30.120247  749135 addons.go:234] Setting addon cloud-spanner=true in "addons-446299"
	I0920 18:13:30.117743  749135 addons.go:69] Setting volcano=true in profile "addons-446299"
	I0920 18:13:30.120264  749135 addons.go:234] Setting addon volcano=true in "addons-446299"
	I0920 18:13:30.120292  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.120352  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.117744  749135 addons.go:69] Setting storage-provisioner=true in profile "addons-446299"
	I0920 18:13:30.120495  749135 addons.go:234] Setting addon storage-provisioner=true in "addons-446299"
	I0920 18:13:30.120536  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.120768  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.120790  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.117753  749135 addons.go:69] Setting volumesnapshots=true in profile "addons-446299"
	I0920 18:13:30.120925  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.120933  749135 addons.go:234] Setting addon volumesnapshots=true in "addons-446299"
	I0920 18:13:30.120955  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.120966  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.122929  749135 out.go:177] * Verifying Kubernetes components...
	I0920 18:13:30.124310  749135 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:13:30.139606  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35413
	I0920 18:13:30.139626  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43313
	I0920 18:13:30.139664  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38439
	I0920 18:13:30.139664  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35171
	I0920 18:13:30.151212  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37399
	I0920 18:13:30.151245  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.151251  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34369
	I0920 18:13:30.151274  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.151393  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.151405  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.151438  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.151856  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.151891  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.152064  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.152188  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.152245  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.152411  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.152423  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.152487  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.152534  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.152664  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.152678  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.152736  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.152850  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.152861  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.152984  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.152995  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.153048  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.153483  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.153515  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.154013  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46593
	I0920 18:13:30.154291  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.154314  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.154382  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.154805  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.154867  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.155632  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.155794  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.155815  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.155882  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.156284  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.156326  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.159168  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.159296  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.159618  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.159652  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.159773  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.159808  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.160117  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.160143  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.160217  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.160647  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.161813  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.161856  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.164600  749135 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-446299"
	I0920 18:13:30.164649  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.165039  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.165072  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.176807  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33581
	I0920 18:13:30.177469  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.178091  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.178111  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.178583  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.179242  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.179271  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.185984  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43023
	I0920 18:13:30.186586  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.187123  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.187144  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.187554  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.188160  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.188203  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.193206  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39461
	I0920 18:13:30.193417  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33557
	I0920 18:13:30.193849  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.194099  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.194452  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.194471  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.194968  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.195118  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.195132  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.195349  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38343
	I0920 18:13:30.195438  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.196077  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.196556  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.196580  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.197033  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.197694  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.197734  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.197960  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36057
	I0920 18:13:30.198500  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.198621  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.198726  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37865
	I0920 18:13:30.198876  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.199030  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.199369  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.199385  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.199416  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.199438  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.199710  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.200318  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.200362  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.200438  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.201288  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.201893  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.201916  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.203229  749135 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 18:13:30.204746  749135 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 18:13:30.204766  749135 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 18:13:30.204788  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.206295  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.206675  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.207700  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43965
	I0920 18:13:30.208147  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.208668  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.208691  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.209400  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.209672  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.209714  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.210328  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.210357  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.210920  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.210948  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.211140  749135 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 18:13:30.211638  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.212145  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.212323  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.212494  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.212630  749135 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 18:13:30.212646  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 18:13:30.212664  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.213593  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39695
	I0920 18:13:30.214660  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34213
	I0920 18:13:30.215405  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.215903  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.215924  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.216384  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.216437  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.216507  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.216537  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.216592  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37735
	I0920 18:13:30.217041  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.217047  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.217305  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.217448  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.217585  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.218334  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.218356  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.218795  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.219018  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.219181  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38177
	I0920 18:13:30.219880  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.219925  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.219979  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.220067  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.220460  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.220482  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.220702  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.220722  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.220787  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38889
	I0920 18:13:30.221095  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.221183  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.221329  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.221386  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:30.221397  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:30.223334  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.223352  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.223398  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:30.223412  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:30.223419  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:30.223427  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:30.223433  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:30.223529  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.224012  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:30.224041  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:30.224048  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	W0920 18:13:30.224154  749135 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0920 18:13:30.224543  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41669
	I0920 18:13:30.225486  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.225509  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.226183  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.226202  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.226560  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 18:13:30.226986  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.227285  749135 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 18:13:30.227644  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.227684  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.228253  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34967
	I0920 18:13:30.228649  749135 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 18:13:30.228675  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 18:13:30.228697  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.229313  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42909
	I0920 18:13:30.229673  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.230049  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 18:13:30.230142  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.230158  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.230485  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.230672  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.231280  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.231806  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46577
	I0920 18:13:30.231963  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.231988  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.232145  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.232332  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.232428  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.232440  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 18:13:30.232482  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.232696  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.233542  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.233796  749135 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:13:30.234419  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.234438  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.234783  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 18:13:30.235010  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.235348  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.236127  749135 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 18:13:30.236900  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38295
	I0920 18:13:30.237440  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 18:13:30.237599  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39621
	I0920 18:13:30.238719  749135 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:13:30.239949  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 18:13:30.240129  749135 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 18:13:30.240146  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 18:13:30.240162  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.242347  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 18:13:30.243261  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.243644  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.243673  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.243908  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.244083  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.244194  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.244349  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.244407  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44717
	I0920 18:13:30.244610  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 18:13:30.245914  749135 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 18:13:30.245941  749135 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 18:13:30.245963  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.246673  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41943
	I0920 18:13:30.247429  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.247556  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.247990  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.248061  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.248074  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.248079  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.248343  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.248449  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.248449  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.248468  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.248596  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.248607  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.248648  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.248833  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.249170  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.249280  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.249352  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.249393  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.249409  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.250084  749135 addons.go:234] Setting addon default-storageclass=true in "addons-446299"
	I0920 18:13:30.250124  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.250508  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.250532  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.251170  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.251192  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.251274  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.251488  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.251857  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.251862  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.251910  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.251940  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.252078  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.252212  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.252224  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.252440  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.252553  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.252748  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.252820  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.252833  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.253735  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.253941  749135 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 18:13:30.254017  749135 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 18:13:30.253980  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.254455  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.254656  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.254870  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.254873  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.255177  749135 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:13:30.255187  749135 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 18:13:30.255205  749135 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 18:13:30.255226  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.255274  749135 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 18:13:30.255278  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.255288  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 18:13:30.255303  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.256466  749135 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 18:13:30.256532  749135 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:13:30.256552  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:13:30.256570  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.258154  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 18:13:30.259159  749135 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 18:13:30.259174  749135 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 18:13:30.259188  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.259235  749135 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 18:13:30.260368  749135 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 18:13:30.260382  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 18:13:30.260394  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.260519  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.260844  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.260873  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.261038  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.261196  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.262948  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.263013  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.263033  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.263050  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.263161  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.263545  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.263701  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.264179  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.264417  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.264628  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.265340  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.265500  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.265732  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.265751  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.266060  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.266249  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.266266  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.266441  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.266593  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.266625  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.266670  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.266742  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.267063  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.267118  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.267232  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.267247  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.267357  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.267382  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.267549  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.267839  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.269511  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36981
	I0920 18:13:30.269878  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.270901  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.270926  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.271296  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.271468  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.273221  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.274917  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43681
	I0920 18:13:30.275136  749135 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 18:13:30.275446  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.276076  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.276096  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.276414  749135 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 18:13:30.276440  749135 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 18:13:30.276461  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.276501  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.276736  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.278674  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.280057  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.280316  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.280342  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.280375  749135 out.go:177]   - Using image docker.io/busybox:stable
	I0920 18:13:30.280530  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.280706  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.280828  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.280961  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	W0920 18:13:30.281845  749135 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:35600->192.168.39.237:22: read: connection reset by peer
	I0920 18:13:30.281937  749135 retry.go:31] will retry after 148.234221ms: ssh: handshake failed: read tcp 192.168.39.1:35600->192.168.39.237:22: read: connection reset by peer
	I0920 18:13:30.282766  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37633
	I0920 18:13:30.282794  749135 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 18:13:30.283193  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.283743  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.283764  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.284120  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.284286  749135 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 18:13:30.284302  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 18:13:30.284319  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.284696  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.284848  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.290962  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.290998  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.291015  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.291035  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.291443  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.291607  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.291761  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.301013  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39971
	I0920 18:13:30.301540  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.302060  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.302090  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.302449  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.302621  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.303997  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.304220  749135 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:13:30.304236  749135 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:13:30.304256  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.307237  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.307715  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.307749  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.307899  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.308079  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.308237  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.308392  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.604495  749135 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:13:30.604525  749135 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 18:13:30.661112  749135 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 18:13:30.661146  749135 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 18:13:30.662437  749135 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 18:13:30.662469  749135 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 18:13:30.705589  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 18:13:30.750149  749135 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 18:13:30.750187  749135 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 18:13:30.753172  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 18:13:30.755196  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 18:13:30.771513  749135 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 18:13:30.771540  749135 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 18:13:30.797810  749135 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 18:13:30.797835  749135 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 18:13:30.807101  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 18:13:30.868448  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:13:30.869944  749135 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 18:13:30.869963  749135 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 18:13:30.871146  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:13:30.896462  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 18:13:30.900930  749135 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 18:13:30.900959  749135 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 18:13:30.906831  749135 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 18:13:30.906880  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 18:13:30.933744  749135 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 18:13:30.933774  749135 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 18:13:30.969038  749135 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 18:13:30.969076  749135 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 18:13:31.000321  749135 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 18:13:31.000354  749135 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 18:13:31.182228  749135 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 18:13:31.182256  749135 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 18:13:31.198470  749135 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 18:13:31.198506  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 18:13:31.232002  749135 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 18:13:31.232027  749135 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 18:13:31.241138  749135 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 18:13:31.241162  749135 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 18:13:31.303359  749135 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 18:13:31.303389  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 18:13:31.308659  749135 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 18:13:31.308686  749135 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 18:13:31.411918  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 18:13:31.444332  749135 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 18:13:31.444368  749135 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 18:13:31.517643  749135 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:13:31.517669  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 18:13:31.522528  749135 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 18:13:31.522555  749135 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 18:13:31.527932  749135 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:13:31.527961  749135 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 18:13:31.598680  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 18:13:31.753266  749135 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 18:13:31.753305  749135 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 18:13:31.825090  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:13:31.868789  749135 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 18:13:31.868821  749135 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 18:13:31.871872  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:13:32.035165  749135 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 18:13:32.035205  749135 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 18:13:32.325034  749135 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 18:13:32.325068  749135 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 18:13:32.426301  749135 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 18:13:32.426330  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 18:13:32.734227  749135 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 18:13:32.734252  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 18:13:32.776162  749135 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 18:13:32.776201  749135 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 18:13:32.973816  749135 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.369238207s)
	I0920 18:13:32.973844  749135 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.369303036s)
	I0920 18:13:32.973868  749135 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0920 18:13:32.974717  749135 node_ready.go:35] waiting up to 6m0s for node "addons-446299" to be "Ready" ...
	I0920 18:13:32.978640  749135 node_ready.go:49] node "addons-446299" has status "Ready":"True"
	I0920 18:13:32.978660  749135 node_ready.go:38] duration metric: took 3.921107ms for node "addons-446299" to be "Ready" ...
	I0920 18:13:32.978672  749135 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:13:32.990987  749135 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8b5fx" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:33.092955  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 18:13:33.125330  749135 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 18:13:33.125357  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 18:13:33.271505  749135 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 18:13:33.271534  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 18:13:33.497723  749135 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-446299" context rescaled to 1 replicas
	I0920 18:13:33.600812  749135 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 18:13:33.600847  749135 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 18:13:33.656016  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.902807697s)
	I0920 18:13:33.656075  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:33.656075  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.900839477s)
	I0920 18:13:33.656016  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.950386811s)
	I0920 18:13:33.656109  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:33.656121  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:33.656127  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:33.656090  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:33.656146  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:33.656567  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:33.656587  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:33.656608  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:33.656624  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:33.656627  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:33.656653  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:33.656665  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:33.656676  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:33.656635  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:33.656718  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:33.656637  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:33.656744  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:33.656760  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:33.656767  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:33.656730  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:33.657076  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:33.657118  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:33.657119  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:33.657096  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:33.657156  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:33.657263  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:33.657279  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:33.758218  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 18:13:35.015799  749135 pod_ready.go:103] pod "coredns-7c65d6cfc9-8b5fx" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:35.494820  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.687683083s)
	I0920 18:13:35.494889  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.494891  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.626405857s)
	I0920 18:13:35.494920  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.494932  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.494930  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.623755287s)
	I0920 18:13:35.494950  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.494983  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.495052  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.495370  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.495388  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.495396  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.495404  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.496899  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:35.496907  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:35.496907  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:35.496946  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.496958  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.496966  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.496977  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.496990  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.496999  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.497065  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.497077  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.497089  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.497098  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.497258  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.497276  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.498278  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:35.498290  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.498301  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.545445  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.545475  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.545718  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.545745  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.545752  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	W0920 18:13:35.545859  749135 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
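	The 'storage-provisioner-rancher' warning above is an optimistic-concurrency conflict: two writers update the "local-path" StorageClass at nearly the same time, and the loser gets "the object has been modified; please apply your changes to the latest version and try again". The usual remedy is to re-read the object and retry the update on conflict. Below is a minimal sketch of that pattern using client-go's retry helper; it is illustrative only and is not minikube's actual addon callback.

	// Minimal sketch (assumption: not minikube's real code) of marking a
	// StorageClass as default while tolerating update conflicts.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/retry"
	)

	func markDefault(ctx context.Context, clientset *kubernetes.Clientset, name string) error {
		// RetryOnConflict re-reads the object and retries the update whenever
		// the API server rejects it because another writer changed it first.
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := clientset.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
			_, err = clientset.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := markDefault(context.Background(), clientset, "local-path"); err != nil {
			fmt.Println("marking local-path default failed:", err)
		}
	}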
	I0920 18:13:35.559802  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.559831  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.560074  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.560092  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.560108  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:36.023603  749135 pod_ready.go:93] pod "coredns-7c65d6cfc9-8b5fx" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:36.023630  749135 pod_ready.go:82] duration metric: took 3.032619357s for pod "coredns-7c65d6cfc9-8b5fx" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.023643  749135 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tfngl" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.059659  749135 pod_ready.go:93] pod "coredns-7c65d6cfc9-tfngl" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:36.059693  749135 pod_ready.go:82] duration metric: took 36.040161ms for pod "coredns-7c65d6cfc9-tfngl" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.059705  749135 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.075393  749135 pod_ready.go:93] pod "etcd-addons-446299" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:36.075428  749135 pod_ready.go:82] duration metric: took 15.714418ms for pod "etcd-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.075441  749135 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.089509  749135 pod_ready.go:93] pod "kube-apiserver-addons-446299" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:36.089536  749135 pod_ready.go:82] duration metric: took 14.086774ms for pod "kube-apiserver-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.089546  749135 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.600534  749135 pod_ready.go:93] pod "kube-controller-manager-addons-446299" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:36.600565  749135 pod_ready.go:82] duration metric: took 511.011851ms for pod "kube-controller-manager-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.600579  749135 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9pcgb" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.797080  749135 pod_ready.go:93] pod "kube-proxy-9pcgb" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:36.797111  749135 pod_ready.go:82] duration metric: took 196.523175ms for pod "kube-proxy-9pcgb" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.797123  749135 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:37.195153  749135 pod_ready.go:93] pod "kube-scheduler-addons-446299" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:37.195185  749135 pod_ready.go:82] duration metric: took 398.053895ms for pod "kube-scheduler-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:37.195198  749135 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:37.260708  749135 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 18:13:37.260749  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:37.264035  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:37.264543  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:37.264579  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:37.264739  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:37.264958  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:37.265141  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:37.265285  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:37.472764  749135 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 18:13:37.656998  749135 addons.go:234] Setting addon gcp-auth=true in "addons-446299"
	I0920 18:13:37.657072  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:37.657494  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:37.657545  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:37.673709  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40331
	I0920 18:13:37.674398  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:37.674958  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:37.674981  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:37.675363  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:37.675843  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:37.675888  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:37.691444  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38543
	I0920 18:13:37.692042  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:37.692560  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:37.692593  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:37.693006  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:37.693249  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:37.695166  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:37.695451  749135 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 18:13:37.695481  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:37.698450  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:37.698921  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:37.698953  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:37.699128  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:37.699312  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:37.699441  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:37.699604  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:38.819493  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.922986564s)
	I0920 18:13:38.819541  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.407583803s)
	I0920 18:13:38.819575  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.819591  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.819607  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.819648  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.220925429s)
	I0920 18:13:38.819598  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.819686  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.819705  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.819778  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.994650356s)
	W0920 18:13:38.819815  749135 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 18:13:38.819840  749135 retry.go:31] will retry after 365.705658ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
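	The failure above is the usual CRD ordering race: the VolumeSnapshot CRDs and the VolumeSnapshotClass that depends on them are applied in the same kubectl invocation, so the class cannot be mapped until the CRDs are registered, and minikube schedules a retry (the 18:13:39 entry further down shows the retry issued with "kubectl apply --force"). Below is a minimal sketch of that retry-after-delay pattern; it is not minikube's actual retry.go, and the attempt count and delay are illustrative values.

	// Minimal sketch (assumption: not minikube's real retry logic) of
	// re-applying manifests when a custom resource refers to a CRD that the
	// API server has not registered yet ("no matches for kind ...").
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry shells out to kubectl and retries after a short delay so
	// freshly-created CRDs have time to become established.
	func applyWithRetry(kubeconfig string, manifests []string, attempts int, delay time.Duration) error {
		args := []string{"--kubeconfig=" + kubeconfig, "apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("kubectl", args...).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("kubectl apply: %v\n%s", err, out)
			time.Sleep(delay)
		}
		return lastErr
	}

	func main() {
		// Manifest paths taken from the log above.
		manifests := []string{
			"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
			"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
		}
		if err := applyWithRetry("/var/lib/minikube/kubeconfig", manifests, 3, 400*time.Millisecond); err != nil {
			fmt.Println(err)
		}
	}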
	I0920 18:13:38.819845  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.947942371s)
	I0920 18:13:38.819873  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.819885  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.819961  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.726965652s)
	I0920 18:13:38.820001  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.820012  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.820227  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.820244  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.820285  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.820295  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.820413  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:38.820433  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:38.820460  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.820467  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.820475  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.820481  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.820629  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.820639  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.820647  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.820655  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.820718  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:38.820773  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.820781  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.820789  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.820795  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.821299  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:38.821316  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:38.821349  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.821355  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.821365  749135 addons.go:475] Verifying addon registry=true in "addons-446299"
	I0920 18:13:38.821906  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.821917  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.821926  749135 addons.go:475] Verifying addon ingress=true in "addons-446299"
	I0920 18:13:38.821997  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.822026  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.822038  749135 addons.go:475] Verifying addon metrics-server=true in "addons-446299"
	I0920 18:13:38.822070  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.822084  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.822092  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.822100  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.822128  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.822143  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.822495  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:38.822542  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.822551  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.823406  749135 out.go:177] * Verifying ingress addon...
	I0920 18:13:38.823868  749135 out.go:177] * Verifying registry addon...
	I0920 18:13:38.824871  749135 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-446299 service yakd-dashboard -n yakd-dashboard
	
	I0920 18:13:38.825597  749135 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 18:13:38.826680  749135 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 18:13:38.844205  749135 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 18:13:38.844236  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:38.850356  749135 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 18:13:38.850383  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:39.186375  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:13:39.200878  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:39.330411  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:39.330769  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:39.849376  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:39.851690  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:40.361850  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:40.362230  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:41.034778  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:41.035000  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:41.038162  749135 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.342687523s)
	I0920 18:13:41.038403  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.280132041s)
	I0920 18:13:41.038461  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:41.038481  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:41.038819  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:41.038884  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:41.038905  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:41.038922  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:41.039163  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:41.039205  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:41.039225  749135 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-446299"
	I0920 18:13:41.039205  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:41.041287  749135 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 18:13:41.041290  749135 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:13:41.043438  749135 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 18:13:41.044297  749135 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 18:13:41.044713  749135 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 18:13:41.044732  749135 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 18:13:41.101841  749135 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 18:13:41.101863  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:41.130328  749135 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 18:13:41.130361  749135 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 18:13:41.246926  749135 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 18:13:41.246950  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 18:13:41.330722  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:41.331217  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:41.367190  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 18:13:41.375612  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.189187999s)
	I0920 18:13:41.375679  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:41.375703  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:41.376082  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:41.376123  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:41.376131  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:41.376140  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:41.376180  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:41.376437  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:41.376461  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:41.376464  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:41.548363  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:41.701651  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:41.831758  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:41.831933  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:42.053967  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:42.331450  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:42.331860  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:42.559368  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:42.796101  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.428861154s)
	I0920 18:13:42.796164  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:42.796186  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:42.796539  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:42.796652  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:42.796628  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:42.796665  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:42.796674  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:42.796931  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:42.796948  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:42.796971  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:42.798018  749135 addons.go:475] Verifying addon gcp-auth=true in "addons-446299"
	I0920 18:13:42.799750  749135 out.go:177] * Verifying gcp-auth addon...
	I0920 18:13:42.801961  749135 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 18:13:42.813536  749135 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 18:13:42.813557  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:42.834100  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:42.834512  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:43.050004  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:43.305311  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:43.330407  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:43.331586  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:43.549945  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:43.702111  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:43.806287  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:43.830332  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:43.830560  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:44.050313  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:44.307181  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:44.332062  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:44.332579  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:44.549621  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:44.806074  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:44.830087  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:44.830821  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:45.049798  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:45.305355  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:45.329798  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:45.330472  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:45.549159  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:45.702368  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:45.805600  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:45.830331  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:45.831003  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:46.048681  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:46.476235  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:46.476881  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:46.477765  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:46.576766  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:46.805777  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:46.830583  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:46.831463  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:47.050496  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:47.307091  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:47.330512  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:47.331048  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:47.549305  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:47.805735  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:47.830215  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:47.831512  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:48.049902  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:48.202178  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:48.306243  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:48.329718  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:48.332280  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:48.550170  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:48.805429  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:48.829830  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:48.831490  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:49.050407  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:49.305950  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:49.331188  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:49.331284  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:49.549193  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:49.805377  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:49.831064  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:49.831335  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:50.050205  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:50.205469  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:50.306610  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:50.330226  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:50.331728  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:50.548853  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:50.806045  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:50.830924  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:50.831062  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:51.049036  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:51.305994  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:51.330295  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:51.330905  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:51.549433  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:51.805870  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:51.830479  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:51.831665  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:52.050500  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:52.305644  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:52.330460  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:52.330909  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:52.549056  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:52.700600  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:52.805458  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:52.829967  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:52.831274  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:53.049224  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:53.306145  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:53.330699  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:53.331032  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:53.548388  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:54.211235  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:54.211371  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:54.211581  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:54.212019  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:54.305931  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:54.332757  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:54.333316  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:54.550241  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:54.701439  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:54.805276  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:54.830616  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:54.831417  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:55.057083  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:55.305836  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:55.330687  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:55.331243  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:55.550673  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:55.701690  749135 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:55.701725  749135 pod_ready.go:82] duration metric: took 18.50651845s for pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:55.701734  749135 pod_ready.go:39] duration metric: took 22.723049339s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:13:55.701754  749135 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:13:55.701817  749135 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:13:55.736899  749135 api_server.go:72] duration metric: took 25.619420852s to wait for apiserver process to appear ...
	I0920 18:13:55.736929  749135 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:13:55.736952  749135 api_server.go:253] Checking apiserver healthz at https://192.168.39.237:8443/healthz ...
	I0920 18:13:55.741901  749135 api_server.go:279] https://192.168.39.237:8443/healthz returned 200:
	ok
	I0920 18:13:55.743609  749135 api_server.go:141] control plane version: v1.31.1
	I0920 18:13:55.743635  749135 api_server.go:131] duration metric: took 6.69997ms to wait for apiserver health ...
	I0920 18:13:55.743646  749135 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:13:55.757231  749135 system_pods.go:59] 17 kube-system pods found
	I0920 18:13:55.757585  749135 system_pods.go:61] "coredns-7c65d6cfc9-8b5fx" [226fc466-f0b5-4501-8879-b8b9b8d758ac] Running
	I0920 18:13:55.757615  749135 system_pods.go:61] "csi-hostpath-attacher-0" [b131974d-0f4b-4bc6-bec3-d4c797279aa4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 18:13:55.757633  749135 system_pods.go:61] "csi-hostpath-resizer-0" [684355d7-d68e-4357-8103-d8350a38ea37] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 18:13:55.757647  749135 system_pods.go:61] "csi-hostpathplugin-fcmx5" [1576357c-2e2c-469a-b069-dcac225f49c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 18:13:55.757654  749135 system_pods.go:61] "etcd-addons-446299" [c82607ca-b677-4592-935a-a32dad76e79c] Running
	I0920 18:13:55.757662  749135 system_pods.go:61] "kube-apiserver-addons-446299" [93375989-de9f-4fea-afcc-44d35775ddd6] Running
	I0920 18:13:55.757668  749135 system_pods.go:61] "kube-controller-manager-addons-446299" [4c06855c-f18c-4df4-bd04-584c8594a744] Running
	I0920 18:13:55.757677  749135 system_pods.go:61] "kube-ingress-dns-minikube" [631849c1-f984-4e83-b07b-6b2ed4eb0697] Running
	I0920 18:13:55.757682  749135 system_pods.go:61] "kube-proxy-9pcgb" [934faade-c115-4ced-9bb6-c22a2fe014f2] Running
	I0920 18:13:55.757689  749135 system_pods.go:61] "kube-scheduler-addons-446299" [ce4ce9a3-dd64-47ed-a920-b6c5359c80a7] Running
	I0920 18:13:55.757697  749135 system_pods.go:61] "metrics-server-84c5f94fbc-dgfgh" [84513540-b090-4d24-b6e0-9ed764434018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:13:55.757705  749135 system_pods.go:61] "nvidia-device-plugin-daemonset-6l2l2" [c6db8268-e330-413b-9107-88c63f861e42] Running
	I0920 18:13:55.757714  749135 system_pods.go:61] "registry-66c9cd494c-vxc6t" [10b4cecb-c85b-45ef-8043-e88a81971d51] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 18:13:55.757725  749135 system_pods.go:61] "registry-proxy-bqdmf" [11ab987d-a80f-412a-8a15-03a5898a2e9e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 18:13:55.757738  749135 system_pods.go:61] "snapshot-controller-56fcc65765-4qwlb" [d4cd83fc-a074-4317-9b02-22010ae0ca66] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 18:13:55.757750  749135 system_pods.go:61] "snapshot-controller-56fcc65765-8rk95" [63d1f200-a587-488c-82d3-bf38586a6fd0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 18:13:55.757759  749135 system_pods.go:61] "storage-provisioner" [0e9e378d-208e-46e0-a2be-70f96e59408a] Running
	I0920 18:13:55.757770  749135 system_pods.go:74] duration metric: took 14.117036ms to wait for pod list to return data ...
	I0920 18:13:55.757782  749135 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:13:55.762579  749135 default_sa.go:45] found service account: "default"
	I0920 18:13:55.762610  749135 default_sa.go:55] duration metric: took 4.817698ms for default service account to be created ...
	I0920 18:13:55.762622  749135 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:13:55.772780  749135 system_pods.go:86] 17 kube-system pods found
	I0920 18:13:55.772808  749135 system_pods.go:89] "coredns-7c65d6cfc9-8b5fx" [226fc466-f0b5-4501-8879-b8b9b8d758ac] Running
	I0920 18:13:55.772816  749135 system_pods.go:89] "csi-hostpath-attacher-0" [b131974d-0f4b-4bc6-bec3-d4c797279aa4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 18:13:55.772822  749135 system_pods.go:89] "csi-hostpath-resizer-0" [684355d7-d68e-4357-8103-d8350a38ea37] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 18:13:55.772830  749135 system_pods.go:89] "csi-hostpathplugin-fcmx5" [1576357c-2e2c-469a-b069-dcac225f49c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 18:13:55.772834  749135 system_pods.go:89] "etcd-addons-446299" [c82607ca-b677-4592-935a-a32dad76e79c] Running
	I0920 18:13:55.772839  749135 system_pods.go:89] "kube-apiserver-addons-446299" [93375989-de9f-4fea-afcc-44d35775ddd6] Running
	I0920 18:13:55.772842  749135 system_pods.go:89] "kube-controller-manager-addons-446299" [4c06855c-f18c-4df4-bd04-584c8594a744] Running
	I0920 18:13:55.772847  749135 system_pods.go:89] "kube-ingress-dns-minikube" [631849c1-f984-4e83-b07b-6b2ed4eb0697] Running
	I0920 18:13:55.772851  749135 system_pods.go:89] "kube-proxy-9pcgb" [934faade-c115-4ced-9bb6-c22a2fe014f2] Running
	I0920 18:13:55.772856  749135 system_pods.go:89] "kube-scheduler-addons-446299" [ce4ce9a3-dd64-47ed-a920-b6c5359c80a7] Running
	I0920 18:13:55.772865  749135 system_pods.go:89] "metrics-server-84c5f94fbc-dgfgh" [84513540-b090-4d24-b6e0-9ed764434018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:13:55.772922  749135 system_pods.go:89] "nvidia-device-plugin-daemonset-6l2l2" [c6db8268-e330-413b-9107-88c63f861e42] Running
	I0920 18:13:55.772931  749135 system_pods.go:89] "registry-66c9cd494c-vxc6t" [10b4cecb-c85b-45ef-8043-e88a81971d51] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 18:13:55.772936  749135 system_pods.go:89] "registry-proxy-bqdmf" [11ab987d-a80f-412a-8a15-03a5898a2e9e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 18:13:55.772946  749135 system_pods.go:89] "snapshot-controller-56fcc65765-4qwlb" [d4cd83fc-a074-4317-9b02-22010ae0ca66] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 18:13:55.772953  749135 system_pods.go:89] "snapshot-controller-56fcc65765-8rk95" [63d1f200-a587-488c-82d3-bf38586a6fd0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 18:13:55.772957  749135 system_pods.go:89] "storage-provisioner" [0e9e378d-208e-46e0-a2be-70f96e59408a] Running
	I0920 18:13:55.772963  749135 system_pods.go:126] duration metric: took 10.336403ms to wait for k8s-apps to be running ...
	I0920 18:13:55.772972  749135 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:13:55.773018  749135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:13:55.793348  749135 system_svc.go:56] duration metric: took 20.361414ms WaitForService to wait for kubelet
	I0920 18:13:55.793389  749135 kubeadm.go:582] duration metric: took 25.675912921s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:13:55.793417  749135 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:13:55.802544  749135 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:13:55.802600  749135 node_conditions.go:123] node cpu capacity is 2
	I0920 18:13:55.802617  749135 node_conditions.go:105] duration metric: took 9.193115ms to run NodePressure ...
	I0920 18:13:55.802639  749135 start.go:241] waiting for startup goroutines ...
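Note: the readiness phase logged above (kube-system pods, default service account, kubelet service, node conditions) can be reproduced outside minikube with a small client-go program. The sketch below is illustrative only; it assumes a reachable kubeconfig at the default location and the same kube-system namespace, and it mirrors what system_pods.go reports rather than being minikube's own code.

	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the kubeconfig the same way kubectl does (assumption: default ~/.kube/config).
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// List kube-system pods and report which ones are Ready, similar to the
		// system_pods.go lines in the log above.
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("%s ready=%v phase=%s\n", p.Name, ready, p.Status.Phase)
		}
	}

Run against the addons-446299 context, this should enumerate the same 17 kube-system pods listed above, with the CSI, registry and snapshot-controller pods still unready at this point in the run.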
	I0920 18:13:55.807268  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:55.834016  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:55.834628  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:56.049150  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:56.305873  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:56.331424  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:56.331798  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:56.550328  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:56.806065  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:56.829659  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:56.830161  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:57.049081  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:57.306075  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:57.329355  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:57.330540  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:57.549591  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:57.805900  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:57.830374  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:57.832330  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:58.049092  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:58.306271  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:58.329770  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:58.331160  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:58.922331  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:58.923063  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:58.923163  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:58.924173  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:59.050995  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:59.306609  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:59.410277  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:59.410618  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:59.549349  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:59.806119  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:59.829906  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:59.830124  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:00.049161  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:00.306487  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:00.330117  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:00.331103  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:00.549561  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:00.806760  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:00.831148  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:00.831297  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:01.050001  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:01.306298  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:01.407860  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:01.408083  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:01.548728  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:01.806320  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:01.830021  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:01.830689  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:02.048991  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:02.305521  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:02.330400  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:02.331175  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:02.549048  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:02.805598  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:02.830127  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:02.830327  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:03.049629  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:03.305858  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:03.331322  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:03.331679  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:03.548558  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:03.820166  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:03.830589  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:03.832021  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:04.465452  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:04.465905  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:04.465965  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:04.466066  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:04.565162  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:04.805221  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:04.830427  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:04.830573  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:05.050021  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:05.305449  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:05.330307  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:05.331288  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:05.549216  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:05.805952  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:05.830822  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:05.830882  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:06.048888  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:06.305947  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:06.330556  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:06.330915  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:06.549018  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:06.806964  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:06.841818  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:06.843261  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:07.048576  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:07.305982  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:07.330357  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:07.330437  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:07.549676  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:07.813909  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:07.830340  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:07.830795  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:08.050020  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:08.306364  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:08.330678  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:08.332935  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:08.548619  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:08.805004  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:08.830441  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:08.831560  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:09.332291  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:09.333139  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:09.333782  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:09.335034  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:09.549087  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:09.805906  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:09.829949  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:09.830348  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:10.049303  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:10.306098  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:10.329817  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:10.330883  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:10.549227  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:10.951479  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:10.951670  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:10.951904  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:11.048505  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:11.306899  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:11.330827  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:11.331176  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:11.549848  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:11.805719  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:11.830262  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:11.830606  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:12.059649  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:12.305971  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:12.329961  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:12.330563  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:12.549966  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:12.804939  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:12.829214  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:12.830837  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:13.048395  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:13.305641  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:13.331438  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:13.331605  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:13.549421  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:13.805919  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:13.831661  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:13.831730  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:14.049399  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:14.306300  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:14.329818  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:14.330774  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:14.552222  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:14.806365  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:14.829698  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:14.831887  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:15.048953  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:15.305618  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:15.330650  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:15.330943  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:15.548777  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:15.806132  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:15.830944  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:15.831352  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:16.052172  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:16.306342  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:16.329653  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:16.330883  749135 kapi.go:107] duration metric: took 37.504199599s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 18:14:16.548598  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:16.805754  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:16.830184  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:17.049843  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:17.383048  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:17.383735  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:17.550278  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:17.806058  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:17.829341  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:18.051596  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:18.306388  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:18.334664  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:18.552534  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:18.806897  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:18.830308  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:19.050045  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:19.306131  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:19.329862  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:19.550696  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:19.807045  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:19.829977  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:20.048666  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:20.306256  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:20.329911  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:20.550226  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:20.806144  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:20.830855  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:21.049583  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:21.310640  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:21.412808  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:21.549653  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:21.805953  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:21.829404  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:22.049850  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:22.315829  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:22.331862  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:22.549120  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:22.806085  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:22.829986  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:23.049654  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:23.306266  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:23.330058  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:23.560251  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:23.807013  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:23.830715  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:24.049404  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:24.306201  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:24.330512  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:24.595031  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:24.806293  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:24.907159  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:25.048965  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:25.305513  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:25.331059  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:25.549920  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:25.805287  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:25.830246  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:26.048992  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:26.306656  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:26.329987  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:26.549698  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:26.808992  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:26.829741  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:27.052649  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:27.312773  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:27.331951  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:27.562526  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:27.805604  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:27.830050  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:28.067172  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:28.306333  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:28.330924  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:28.550567  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:28.807713  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:28.836265  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:29.049440  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:29.305994  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:29.329628  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:29.551265  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:29.807081  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:29.829169  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:30.051607  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:30.308200  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:30.331298  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:30.553108  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:30.822844  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:30.831353  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:31.049853  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:31.305139  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:31.329419  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:31.549350  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:31.806142  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:31.829483  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:32.053013  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:32.306129  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:32.330537  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:32.771680  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:32.806908  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:32.831303  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:33.050163  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:33.305068  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:33.330437  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:33.548440  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:33.806177  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:33.830995  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:34.049496  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:34.310365  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:34.329994  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:34.548907  749135 kapi.go:107] duration metric: took 53.50460724s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 18:14:34.805871  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:34.830222  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:35.306762  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:35.330726  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:35.806453  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:35.830187  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:36.305548  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:36.330510  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:36.806443  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:36.829844  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:37.306287  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:37.330018  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:37.806187  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:37.829944  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:38.306428  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:38.330700  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:38.806275  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:38.830764  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:39.305577  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:39.330471  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:39.806014  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:39.829683  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:40.306572  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:40.329962  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:40.806663  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:40.830402  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:41.305985  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:41.329856  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:41.807066  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:41.829842  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:42.305779  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:42.330575  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:42.805256  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:42.829665  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:43.305345  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:43.329924  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:43.805970  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:43.829619  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:44.305067  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:44.330110  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:44.807165  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:44.832428  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:45.307073  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:45.329430  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:45.807239  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:45.829759  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:46.305795  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:46.330660  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:46.807307  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:46.829950  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:47.306710  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:47.330054  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:47.806495  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:47.830576  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:48.305615  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:48.330601  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:48.805326  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:48.829994  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:49.306221  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:49.330067  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:49.807517  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:49.831847  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:50.312486  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:50.412022  749135 kapi.go:107] duration metric: took 1m11.586419635s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 18:14:50.805525  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:51.306784  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:51.919819  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:52.306451  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:52.809242  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:53.318752  749135 kapi.go:107] duration metric: took 1m10.516788064s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 18:14:53.320395  749135 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-446299 cluster.
	I0920 18:14:53.321854  749135 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 18:14:53.323252  749135 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 18:14:53.324985  749135 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, storage-provisioner, default-storageclass, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0920 18:14:53.326283  749135 addons.go:510] duration metric: took 1m23.208765269s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner storage-provisioner default-storageclass metrics-server inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0920 18:14:53.326342  749135 start.go:246] waiting for cluster config update ...
	I0920 18:14:53.326365  749135 start.go:255] writing updated cluster config ...
	I0920 18:14:53.326710  749135 ssh_runner.go:195] Run: rm -f paused
	I0920 18:14:53.387365  749135 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:14:53.389186  749135 out.go:177] * Done! kubectl is now configured to use "addons-446299" cluster and "default" namespace by default
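
For reference, the gcp-auth hint above works by labeling the pod itself: the addon's mutating webhook (the gcr.io/k8s-minikube/gcp-auth-webhook container seen later in this log) skips any pod that carries a label with the `gcp-auth-skip-secret` key. A minimal sketch of such a pod spec, assuming the conventional "true" value and an illustrative name and image (neither is taken from this run):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds              # hypothetical name
	  labels:
	    gcp-auth-skip-secret: "true"  # label key from the message above; value assumed
	spec:
	  containers:
	  - name: app
	    image: gcr.io/k8s-minikube/busybox   # illustrative image
	    command: ["sleep", "3600"]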
	
	
	==> CRI-O <==
	Sep 20 18:28:09 addons-446299 crio[659]: time="2024-09-20 18:28:09.298080817Z" level=debug msg="GET https://registry-1.docker.io/v2/" file="docker/docker_client.go:631"
	Sep 20 18:28:09 addons-446299 crio[659]: time="2024-09-20 18:28:09.322454408Z" level=debug msg="Event: WRITE         \"/var/run/crio/exits/3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844.LEM8T2\"" file="server/server.go:805"
	Sep 20 18:28:09 addons-446299 crio[659]: time="2024-09-20 18:28:09.322599019Z" level=debug msg="Event: CREATE        \"/var/run/crio/exits/3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844.LEM8T2\"" file="server/server.go:805"
	Sep 20 18:28:09 addons-446299 crio[659]: time="2024-09-20 18:28:09.322732387Z" level=debug msg="Container or sandbox exited: 3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844.LEM8T2" file="server/server.go:810"
	Sep 20 18:28:09 addons-446299 crio[659]: time="2024-09-20 18:28:09.322619453Z" level=debug msg="Event: CREATE        \"/var/run/crio/exits/3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844\"" file="server/server.go:805"
	Sep 20 18:28:09 addons-446299 crio[659]: time="2024-09-20 18:28:09.322792490Z" level=debug msg="Container or sandbox exited: 3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844" file="server/server.go:810"
	Sep 20 18:28:09 addons-446299 crio[659]: time="2024-09-20 18:28:09.322810987Z" level=debug msg="container exited and found: 3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844" file="server/server.go:825"
	Sep 20 18:28:09 addons-446299 crio[659]: time="2024-09-20 18:28:09.322627511Z" level=debug msg="Event: RENAME        \"/var/run/crio/exits/3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844.LEM8T2\"" file="server/server.go:805"
	Sep 20 18:28:09 addons-446299 crio[659]: time="2024-09-20 18:28:09.340285251Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fa8dc963-5d57-4e5b-97a4-6e9df59826de name=/runtime.v1.RuntimeService/Version
	Sep 20 18:28:09 addons-446299 crio[659]: time="2024-09-20 18:28:09.340407807Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fa8dc963-5d57-4e5b-97a4-6e9df59826de name=/runtime.v1.RuntimeService/Version
	Sep 20 18:28:09 addons-446299 crio[659]: time="2024-09-20 18:28:09.349561376Z" level=debug msg="Unmounted container 3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844" file="storage/runtime.go:495" id=cf3b10b6-05a5-4721-8b3f-7b9cb57dcf91 name=/runtime.v1.RuntimeService/StopContainer
	Sep 20 18:28:09 addons-446299 crio[659]: time="2024-09-20 18:28:09.349983537Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d931258-39ac-4278-985c-9707193f2906 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:28:09 addons-446299 crio[659]: time="2024-09-20 18:28:09.351039089Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856889351011708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d931258-39ac-4278-985c-9707193f2906 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:28:09 addons-446299 crio[659]: time="2024-09-20 18:28:09.354004726Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d4f887b-738c-4701-99af-2a8258027738 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:28:09 addons-446299 crio[659]: time="2024-09-20 18:28:09.354084069Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d4f887b-738c-4701-99af-2a8258027738 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:28:09 addons-446299 crio[659]: time="2024-09-20 18:28:09.354532657Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c4b9c3a7c53984fdcd53d01df116f55695ae712f2f303bd6c13b7f7ae352228,PodSandboxId:efe0ec0dcbcc2ed97a1516bf84bf6944f46cc3c709619429a3f8a6ed7ec20db4,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726856092713670363,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9scf7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e1fe9053-9c74-44c1-b9eb-33e656a4810b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba7dc5faa58b70f8ae294e26f758d07d8a41941a4b50201e68cc018c51a0c741,PodSandboxId:75840320e52800f1f44b2e6c517cc9307855642595e4a7055201d0ba2d030659,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726856089744039479,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-8kt58,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 91004bb0-5831-431e-8777-5e
8e4b5296bc,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b094e7c30c796bf0bee43b60b80d46621df4bbd767dc91c732eb3b7bfa0bb00c,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726856074238826249,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed98529d363a04b2955c02104f56e8a3cd80d69b45b2e1944ff3b0b7c189288,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726856072837441671,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69da68d150b2a5583b7305709c1c4bbf0f0a8590d238d599504b11d9ad7b529e,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726856070768208336,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd9ca7a3ca987a47ab5b416daf04522a3b27c6339db4003eb231d16ece603a60,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726856069831000814,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2b6759c0bf97ff3d4de314ce5ca4e5311a8546b342d1ec787ca3a1624f8908,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726856068009772282,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66723f0443fe259bbab9521031456f7833339138ca42ab655fadf6bafc2136c5,PodSandboxId:00b4d98c2977
96e0eb1b921793bddbf0c466ffdc076d60dd27517a349c2d3749,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726856066130067570,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684355d7-d68e-4357-8103-d8350a38ea37,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c917700eb77472b431699f7e3b8ffa5e99fb0c6e7b94da0e7dc3e5d789ff7866,Pod
SandboxId:3ffd6a03ee49011ec8d222722b52204537020ec67831669422b18f2722d276e2,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726856064693574171,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131974d-0f4b-4bc6-bec3-d4c797279aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:509b6bbf231a9f6acf9ed9b5a160d57af8fe6ce822
d14a360f1c69aead3f9d36,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726856062559192499,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e86a2c89e146b1f6fc31a26a2e49b335f8ae30c35e76d7136b68425260628fef,PodSandboxId:a24f9a7c284879488d62c5c3a7402fbdc7b2ff55b494a70888c8b4b46593c754,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061202431069,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2mwr8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: afcf3275-77b0-49cd-b425-e1c3fe89fe90,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:bf44e059a196a437fcc79e35dc09edc08e7e7fa8799df9f5556af7ec52f8bbcc,PodSandboxId:1938162f1608400bc041a5b0473880759f6d77d6783afec076342b08458fb334,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061156977853,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sdwls,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8334b2c4-8b09-408c-8652-46103ce6f6c6,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f5bce9e468f1d83d07951514190608f5cb1a2826158632ec7e66e3d069b730,PodSandboxId:46ab05da30745fa494969aa465b9ae41146fb457dd17388f6f0fbfa7637de4b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059566643922,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-4qwlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4cd83fc-a074-4317-9b02-22010ae0ca66,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf93216045927e57562a5ef14225eebdfc0b71d50b89062312728787ee2e82f,PodSandboxId:f64e4538489ab0114de17e1f8f0c98d3d95618162fa5d2ed9b3853eb59a75d77,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059450265287,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8rk95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63d1f200-a587-488c-82d3-bf38586a6fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844,PodSandboxId:dd8942402304fc3849ddaac3cd53c37f8af44d3a68106d3633546f78cb29c992,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726856057582231326,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-dgfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84513540-b090-4d24-b6e0-9ed764434018,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b425ff4f976afe3cb61d35934638e72a10e0094f7b61f40352a2fee42636302f,PodSandboxId:a0bef6fd3ee4b307210dd0ac0e2746329872520eb77ba21f03f92566351704f2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726856046927873598,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path
-provisioner-86d989889c-tvbgx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b4d58283-346f-437d-adfb-34215341023e,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68195d8abd2e36c4e6f93a25bf60ca76fde83bf77a850a92b5213e7653c8414e,PodSandboxId:50aa8158427c9580c2a5ec7846daa046ebdb66adcc3769f3b811e9bfd73dee74,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726856026660615460,Labels:map[string]string{io.kubernetes.contai
ner.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 631849c1-f984-4e83-b07b-6b2ed4eb0697,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123e17c57dc2abd9c047233f8257257a3994d71637992344add53ad7199bd9f0,PodSandboxId:2de8a3616c78216796d1a30e49390fa1880efae5c01dc6d060c3a9fc52733244,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTA
INER_RUNNING,CreatedAt:1726856016407131102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e9e378d-208e-46e0-a2be-70f96e59408a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52dc29cba22a178059e3f5273c57de1362df61bcd21abc9ad9c5058087ed31a,PodSandboxId:a7fdf4add17f82634ceda8e2a8ce96fc2312b21d1e4bcabce0730c45dba99a5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1
726856014256879968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8b5fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 226fc466-f0b5-4501-8879-b8b9b8d758ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371fb9f89e965c1d1f23b67cb00baa69dc199d2d1a7cb0255780a4516c7256a6,PodSandboxId:5aa37b64d2a9c61038f28fea479857487cf0c835df5704953ae6496a18553faf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaa
a5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856011173606981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pcgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934faade-c115-4ced-9bb6-c22a2fe014f2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e7734f588477ea0c8338b75bff4c99d2033144998f9977041fbf99b5880072,PodSandboxId:4306bc0f35baa7738aceb1c5a0dfcf9c43a7541ffb8e1e463f1d2bfb3b4ddf65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856000251287780,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f419eac436c5a6f133bb67c6a198274,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730952f4127d66b35d731eb28568293e71789263c71a1a0255283cb51922992c,PodSandboxId:403b403cdf21825fc57049326772376016cc8b60292a2666bdde28fa4d9d97d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856000260280505,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da0809c41e3f89be51ba1d85d92334c0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402ab000bdb9360b9d14054aa336dc4312504e85cd5336ba788bcc24a74fb551,PodSandboxId:17de22cbd91b4d025017f1149b32f2168ea0cac728b75d80f78ab208ff3de7aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856000233156133,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86ddc6bc2cc035d3de8f8c47a04894ae,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8af18aadd9a198bf616d46a7b451c4aa04e96f96e40f4b3bfe6f0ed2db6278e,PodSandboxId:859cc747f1c82c2cfec8fa47af83f84bb172224df65a7adc26b7cd23a8e2bb3d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175
ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856000241829850,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c1dc236d6aa092754be85db9af15d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0d4f887b-738c-4701-99af-2a8258027738 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:28:09 addons-446299 crio[659]: time="2024-09-20 18:28:09.366576017Z" level=debug msg="Found exit code for 3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844: 0" file="oci/runtime_oci.go:1022"
	Sep 20 18:28:09 addons-446299 crio[659]: time="2024-09-20 18:28:09.366821387Z" level=debug msg="Skipping status update for: &{State:{Version:1.0.2-dev ID:3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844 Status:stopped Pid:0 Bundle:/run/containers/storage/overlay-containers/3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844/userdata Annotations:map[io.container.manager:cri-o io.kubernetes.container.hash:d807d4fe io.kubernetes.container.name:metrics-server io.kubernetes.container.ports:[{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}] io.kubernetes.container.restartCount:0 io.kubernetes.container.terminationMessagePath:/dev/termination-log io.kubernetes.container.terminationMessagePolicy:File io.kubernetes.cri-o.Annotations:{\"io.kubernetes.container.hash\":\"d807d4fe\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"https\\\",\\\"containerPort\\\":4443,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.c
ontainer.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"} io.kubernetes.cri-o.ContainerID:3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844 io.kubernetes.cri-o.ContainerType:container io.kubernetes.cri-o.Created:2024-09-20T18:14:17.582314554Z io.kubernetes.cri-o.IP.0:10.244.0.10 io.kubernetes.cri-o.Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a io.kubernetes.cri-o.ImageName:registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9 io.kubernetes.cri-o.ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c io.kubernetes.cri-o.Labels:{\"io.kubernetes.container.name\":\"metrics-server\",\"io.kubernetes.pod.name\":\"metrics-server-84c5f94fbc-dgfgh\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"84513540
-b090-4d24-b6e0-9ed764434018\"} io.kubernetes.cri-o.LogPath:/var/log/pods/kube-system_metrics-server-84c5f94fbc-dgfgh_84513540-b090-4d24-b6e0-9ed764434018/metrics-server/0.log io.kubernetes.cri-o.Metadata:{\"name\":\"metrics-server\"} io.kubernetes.cri-o.MountPoint:/var/lib/containers/storage/overlay/058d978730b060ecdbacf617dfcdfe1299ab41927a60d439d19de71c3953e20c/merged io.kubernetes.cri-o.Name:k8s_metrics-server_metrics-server-84c5f94fbc-dgfgh_kube-system_84513540-b090-4d24-b6e0-9ed764434018_0 io.kubernetes.cri-o.PlatformRuntimePath: io.kubernetes.cri-o.ResolvPath:/var/run/containers/storage/overlay-containers/dd8942402304fc3849ddaac3cd53c37f8af44d3a68106d3633546f78cb29c992/userdata/resolv.conf io.kubernetes.cri-o.SandboxID:dd8942402304fc3849ddaac3cd53c37f8af44d3a68106d3633546f78cb29c992 io.kubernetes.cri-o.SandboxName:k8s_metrics-server-84c5f94fbc-dgfgh_kube-system_84513540-b090-4d24-b6e0-9ed764434018_0 io.kubernetes.cri-o.SeccompProfilePath:Unconfined io.kubernetes.cri-o.Stdin:false io.kubernetes.cri-o.St
dinOnce:false io.kubernetes.cri-o.TTY:false io.kubernetes.cri-o.Volumes:[{\"container_path\":\"/tmp\",\"host_path\":\"/var/lib/kubelet/pods/84513540-b090-4d24-b6e0-9ed764434018/volumes/kubernetes.io~empty-dir/tmp-dir\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/84513540-b090-4d24-b6e0-9ed764434018/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/84513540-b090-4d24-b6e0-9ed764434018/containers/metrics-server/3059f1c8\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/84513540-b090-4d24-b6e0-9ed764434018/volumes/kubernetes.io~projected/kube-api-access-grgvf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}] io.kubernetes.pod.name:metrics-server-84c5f94fbc-dgfgh io.kubernetes.
pod.namespace:kube-system io.kubernetes.pod.terminationGracePeriod:30 io.kubernetes.pod.uid:84513540-b090-4d24-b6e0-9ed764434018 kubernetes.io/config.seen:2024-09-20T18:13:36.477257976Z kubernetes.io/config.source:api]} Created:2024-09-20 18:14:17.623875809 +0000 UTC Started:2024-09-20 18:14:17.659452623 +0000 UTC m=+66.394933042 Finished:2024-09-20 18:28:09.32147772 +0000 UTC ExitCode:0xc0014e4ea0 OOMKilled:false SeccompKilled:false Error: InitPid:4585 InitStartTime:8788 CheckpointedAt:0001-01-01 00:00:00 +0000 UTC}" file="oci/runtime_oci.go:946" id=cf3b10b6-05a5-4721-8b3f-7b9cb57dcf91 name=/runtime.v1.RuntimeService/StopContainer
	Sep 20 18:28:09 addons-446299 crio[659]: time="2024-09-20 18:28:09.370803626Z" level=info msg="Stopped container 3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844: kube-system/metrics-server-84c5f94fbc-dgfgh/metrics-server" file="server/container_stop.go:29" id=cf3b10b6-05a5-4721-8b3f-7b9cb57dcf91 name=/runtime.v1.RuntimeService/StopContainer
	Sep 20 18:28:09 addons-446299 crio[659]: time="2024-09-20 18:28:09.370912012Z" level=debug msg="Response: &StopContainerResponse{}" file="otel-collector/interceptors.go:74" id=cf3b10b6-05a5-4721-8b3f-7b9cb57dcf91 name=/runtime.v1.RuntimeService/StopContainer
	Sep 20 18:28:09 addons-446299 crio[659]: time="2024-09-20 18:28:09.371063827Z" level=debug msg="Event: REMOVE        \"/var/run/crio/exits/3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844\"" file="server/server.go:805"
	Sep 20 18:28:09 addons-446299 crio[659]: time="2024-09-20 18:28:09.371507639Z" level=debug msg="Request: &StopPodSandboxRequest{PodSandboxId:dd8942402304fc3849ddaac3cd53c37f8af44d3a68106d3633546f78cb29c992,}" file="otel-collector/interceptors.go:62" id=e8a5e001-21cb-4bd4-826c-95f5806da0c6 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 18:28:09 addons-446299 crio[659]: time="2024-09-20 18:28:09.371567863Z" level=info msg="Stopping pod sandbox: dd8942402304fc3849ddaac3cd53c37f8af44d3a68106d3633546f78cb29c992" file="server/sandbox_stop.go:18" id=e8a5e001-21cb-4bd4-826c-95f5806da0c6 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 18:28:09 addons-446299 crio[659]: time="2024-09-20 18:28:09.371928850Z" level=info msg="Got pod network &{Name:metrics-server-84c5f94fbc-dgfgh Namespace:kube-system ID:dd8942402304fc3849ddaac3cd53c37f8af44d3a68106d3633546f78cb29c992 UID:84513540-b090-4d24-b6e0-9ed764434018 NetNS:/var/run/netns/1269bd1b-0bd3-45e1-9827-ed86c1565b47 Networks:[{Name:bridge Ifname:eth0}] RuntimeConfig:map[bridge:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath:/kubepods/burstable/pod84513540-b090-4d24-b6e0-9ed764434018 PodAnnotations:0xc001db6aa8}] Aliases:map[]}" file="ocicni/ocicni.go:795"
	Sep 20 18:28:09 addons-446299 crio[659]: time="2024-09-20 18:28:09.372138338Z" level=info msg="Deleting pod kube-system_metrics-server-84c5f94fbc-dgfgh from CNI network \"bridge\" (type=bridge)" file="ocicni/ocicni.go:667"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	7c4b9c3a7c539       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 13 minutes ago      Running             gcp-auth                                 0                   efe0ec0dcbcc2       gcp-auth-89d5ffd79-9scf7
	ba7dc5faa58b7       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6                             13 minutes ago      Running             controller                               0                   75840320e5280       ingress-nginx-controller-bc57996ff-8kt58
	b094e7c30c796       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          13 minutes ago      Running             csi-snapshotter                          0                   eccc7c4b1b4ce       csi-hostpathplugin-fcmx5
	bed98529d363a       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          13 minutes ago      Running             csi-provisioner                          0                   eccc7c4b1b4ce       csi-hostpathplugin-fcmx5
	69da68d150b2a       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            13 minutes ago      Running             liveness-probe                           0                   eccc7c4b1b4ce       csi-hostpathplugin-fcmx5
	fd9ca7a3ca987       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           13 minutes ago      Running             hostpath                                 0                   eccc7c4b1b4ce       csi-hostpathplugin-fcmx5
	5a2b6759c0bf9       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                13 minutes ago      Running             node-driver-registrar                    0                   eccc7c4b1b4ce       csi-hostpathplugin-fcmx5
	66723f0443fe2       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              13 minutes ago      Running             csi-resizer                              0                   00b4d98c29779       csi-hostpath-resizer-0
	c917700eb7747       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             13 minutes ago      Running             csi-attacher                             0                   3ffd6a03ee490       csi-hostpath-attacher-0
	509b6bbf231a9       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   13 minutes ago      Running             csi-external-health-monitor-controller   0                   eccc7c4b1b4ce       csi-hostpathplugin-fcmx5
	e86a2c89e146b       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                                             13 minutes ago      Exited              patch                                    1                   a24f9a7c28487       ingress-nginx-admission-patch-2mwr8
	bf44e059a196a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   13 minutes ago      Exited              create                                   0                   1938162f16084       ingress-nginx-admission-create-sdwls
	33f5bce9e468f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      13 minutes ago      Running             volume-snapshot-controller               0                   46ab05da30745       snapshot-controller-56fcc65765-4qwlb
	cbf9321604592       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      13 minutes ago      Running             volume-snapshot-controller               0                   f64e4538489ab       snapshot-controller-56fcc65765-8rk95
	3c3b736165a00       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a                        13 minutes ago      Exited              metrics-server                           0                   dd8942402304f       metrics-server-84c5f94fbc-dgfgh
	b425ff4f976af       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             14 minutes ago      Running             local-path-provisioner                   0                   a0bef6fd3ee4b       local-path-provisioner-86d989889c-tvbgx
	68195d8abd2e3       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             14 minutes ago      Running             minikube-ingress-dns                     0                   50aa8158427c9       kube-ingress-dns-minikube
	123e17c57dc2a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             14 minutes ago      Running             storage-provisioner                      0                   2de8a3616c782       storage-provisioner
	d52dc29cba22a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             14 minutes ago      Running             coredns                                  0                   a7fdf4add17f8       coredns-7c65d6cfc9-8b5fx
	371fb9f89e965       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                                             14 minutes ago      Running             kube-proxy                               0                   5aa37b64d2a9c       kube-proxy-9pcgb
	730952f4127d6       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                                             14 minutes ago      Running             kube-apiserver                           0                   403b403cdf218       kube-apiserver-addons-446299
	e9e7734f58847       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                                             14 minutes ago      Running             kube-scheduler                           0                   4306bc0f35baa       kube-scheduler-addons-446299
	a8af18aadd9a1       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                                             14 minutes ago      Running             kube-controller-manager                  0                   859cc747f1c82       kube-controller-manager-addons-446299
	402ab000bdb93       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             14 minutes ago      Running             etcd                                     0                   17de22cbd91b4       etcd-addons-446299
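
The listing above follows the column layout of `crictl ps -a` (all containers, including exited ones) as run on the node; assuming SSH access to the minikube VM, a plausible way to reproduce it from the test host is:

	out/minikube-linux-amd64 -p addons-446299 ssh -- sudo crictl ps -a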
	
	
	==> coredns [d52dc29cba22a178059e3f5273c57de1362df61bcd21abc9ad9c5058087ed31a] <==
	[INFO] 127.0.0.1:45092 - 31226 "HINFO IN 8537533385009167611.1098357581305743543. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017946303s
	[INFO] 10.244.0.7:50895 - 60070 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000864499s
	[INFO] 10.244.0.7:50895 - 30883 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.004754851s
	[INFO] 10.244.0.7:60479 - 45291 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000276551s
	[INFO] 10.244.0.7:60479 - 60648 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000259587s
	[INFO] 10.244.0.7:34337 - 50221 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000103649s
	[INFO] 10.244.0.7:34337 - 3119 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000190818s
	[INFO] 10.244.0.7:50579 - 48699 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000149541s
	[INFO] 10.244.0.7:50579 - 13882 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00029954s
	[INFO] 10.244.0.7:52674 - 19194 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000100903s
	[INFO] 10.244.0.7:52674 - 48616 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000131897s
	[INFO] 10.244.0.7:34842 - 24908 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000052174s
	[INFO] 10.244.0.7:34842 - 17742 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000131345s
	[INFO] 10.244.0.7:58542 - 36156 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000047177s
	[INFO] 10.244.0.7:58542 - 62014 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000148973s
	[INFO] 10.244.0.7:34082 - 14251 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000145316s
	[INFO] 10.244.0.7:34082 - 45485 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000238133s
	[INFO] 10.244.0.21:56997 - 31030 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000537673s
	[INFO] 10.244.0.21:35720 - 34441 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000147988s
	[INFO] 10.244.0.21:53795 - 23425 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001554s
	[INFO] 10.244.0.21:58869 - 385 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000122258s
	[INFO] 10.244.0.21:37326 - 35127 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00023415s
	[INFO] 10.244.0.21:35448 - 47752 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000126595s
	[INFO] 10.244.0.21:41454 - 25870 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003639103s
	[INFO] 10.244.0.21:51708 - 51164 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00402176s
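
The repeated NXDOMAIN answers followed by a NOERROR answer are the expected effect of the cluster DNS search path rather than a resolution failure: with the usual pod resolv.conf (sketched below from Kubernetes defaults, not captured in this run), a name such as registry.kube-system.svc.cluster.local has fewer dots than the ndots threshold, so each search domain is tried in turn and only the final fully-qualified query returns records.

	# typical /etc/resolv.conf for a pod in the kube-system namespace (assumed defaults)
	search kube-system.svc.cluster.local svc.cluster.local cluster.local
	nameserver 10.96.0.10   # default kube-dns ClusterIP; assumed, not taken from this log
	options ndots:5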
	
	
	==> describe nodes <==
	Name:               addons-446299
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-446299
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=addons-446299
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_13_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-446299
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-446299"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:13:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-446299
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:28:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:23:27 +0000   Fri, 20 Sep 2024 18:13:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:23:27 +0000   Fri, 20 Sep 2024 18:13:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:23:27 +0000   Fri, 20 Sep 2024 18:13:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:23:27 +0000   Fri, 20 Sep 2024 18:13:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.237
	  Hostname:    addons-446299
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 6b51819720d24a4988f4faf5cbed4e8f
	  System UUID:                6b518197-20d2-4a49-88f4-faf5cbed4e8f
	  Boot ID:                    431228fc-f5a8-4282-bf7e-10c36798659f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  default                     task-pv-pod-restore                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  gcp-auth                    gcp-auth-89d5ffd79-9scf7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-8kt58    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         14m
	  kube-system                 coredns-7c65d6cfc9-8b5fx                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     14m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 csi-hostpathplugin-fcmx5                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-addons-446299                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         14m
	  kube-system                 kube-apiserver-addons-446299                250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-addons-446299       200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-9pcgb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-446299                100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 snapshot-controller-56fcc65765-4qwlb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 snapshot-controller-56fcc65765-8rk95        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  local-path-storage          local-path-provisioner-86d989889c-tvbgx     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node addons-446299 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node addons-446299 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node addons-446299 status is now: NodeHasSufficientPID
	  Normal  NodeReady                14m   kubelet          Node addons-446299 status is now: NodeReady
	  Normal  RegisteredNode           14m   node-controller  Node addons-446299 event: Registered Node addons-446299 in Controller
	
	
	==> dmesg <==
	[  +5.305303] systemd-fstab-generator[1328]: Ignoring "noauto" option for root device
	[  +0.141616] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.046436] kauditd_printk_skb: 135 callbacks suppressed
	[  +5.120665] kauditd_printk_skb: 83 callbacks suppressed
	[  +5.997269] kauditd_printk_skb: 97 callbacks suppressed
	[  +8.458196] kauditd_printk_skb: 5 callbacks suppressed
	[Sep20 18:14] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.706525] kauditd_printk_skb: 34 callbacks suppressed
	[ +16.244583] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.135040] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.940354] kauditd_printk_skb: 30 callbacks suppressed
	[ +13.767745] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.007018] kauditd_printk_skb: 48 callbacks suppressed
	[Sep20 18:15] kauditd_printk_skb: 10 callbacks suppressed
	[Sep20 18:16] kauditd_printk_skb: 30 callbacks suppressed
	[Sep20 18:17] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 18:20] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 18:22] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 18:23] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.877503] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.382620] kauditd_printk_skb: 41 callbacks suppressed
	[  +8.681981] kauditd_printk_skb: 39 callbacks suppressed
	[ +13.570039] kauditd_printk_skb: 14 callbacks suppressed
	[Sep20 18:24] kauditd_printk_skb: 2 callbacks suppressed
	[ +30.180557] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [402ab000bdb9360b9d14054aa336dc4312504e85cd5336ba788bcc24a74fb551] <==
	{"level":"warn","ts":"2024-09-20T18:14:32.753190Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"350.800719ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:14:32.753227Z","caller":"traceutil/trace.go:171","msg":"trace[543841858] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1058; }","duration":"350.836502ms","start":"2024-09-20T18:14:32.402385Z","end":"2024-09-20T18:14:32.753221Z","steps":["trace[543841858] 'agreement among raft nodes before linearized reading'  (duration: 350.779906ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:14:32.753246Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T18:14:32.402356Z","time spent":"350.885838ms","remote":"127.0.0.1:36780","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-09-20T18:14:32.753338Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"340.730876ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:14:32.753372Z","caller":"traceutil/trace.go:171","msg":"trace[1542998802] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1058; }","duration":"340.769961ms","start":"2024-09-20T18:14:32.412597Z","end":"2024-09-20T18:14:32.753367Z","steps":["trace[1542998802] 'agreement among raft nodes before linearized reading'  (duration: 340.724283ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:14:32.753846Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"217.265355ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:14:32.753903Z","caller":"traceutil/trace.go:171","msg":"trace[581069886] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1058; }","duration":"217.327931ms","start":"2024-09-20T18:14:32.536567Z","end":"2024-09-20T18:14:32.753895Z","steps":["trace[581069886] 'agreement among raft nodes before linearized reading'  (duration: 217.246138ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:14:51.903628Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.538818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-20T18:14:51.904065Z","caller":"traceutil/trace.go:171","msg":"trace[2043860769] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:0; response_revision:1117; }","duration":"144.082045ms","start":"2024-09-20T18:14:51.759954Z","end":"2024-09-20T18:14:51.904036Z","steps":["trace[2043860769] 'count revisions from in-memory index tree'  (duration: 143.478073ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:14:51.904831Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.923374ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:14:51.904891Z","caller":"traceutil/trace.go:171","msg":"trace[386261722] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1117; }","duration":"111.005288ms","start":"2024-09-20T18:14:51.793876Z","end":"2024-09-20T18:14:51.904881Z","steps":["trace[386261722] 'range keys from in-memory index tree'  (duration: 110.882796ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:23:04.403949Z","caller":"traceutil/trace.go:171","msg":"trace[1232773900] linearizableReadLoop","detail":"{readStateIndex:2064; appliedIndex:2063; }","duration":"137.955638ms","start":"2024-09-20T18:23:04.265959Z","end":"2024-09-20T18:23:04.403914Z","steps":["trace[1232773900] 'read index received'  (duration: 137.83631ms)","trace[1232773900] 'applied index is now lower than readState.Index'  (duration: 118.922µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T18:23:04.404190Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.160514ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:23:04.404218Z","caller":"traceutil/trace.go:171","msg":"trace[1586547199] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1925; }","duration":"138.254725ms","start":"2024-09-20T18:23:04.265955Z","end":"2024-09-20T18:23:04.404210Z","steps":["trace[1586547199] 'agreement among raft nodes before linearized reading'  (duration: 138.105756ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:23:04.404422Z","caller":"traceutil/trace.go:171","msg":"trace[700372140] transaction","detail":"{read_only:false; response_revision:1925; number_of_response:1; }","duration":"379.764994ms","start":"2024-09-20T18:23:04.024645Z","end":"2024-09-20T18:23:04.404410Z","steps":["trace[700372140] 'process raft request'  (duration: 379.19458ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:23:04.404517Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T18:23:04.024622Z","time spent":"379.814521ms","remote":"127.0.0.1:36928","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1924 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-20T18:23:21.256394Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1506}
	{"level":"info","ts":"2024-09-20T18:23:21.288238Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1506,"took":"31.314726ms","hash":517065302,"current-db-size-bytes":7016448,"current-db-size":"7.0 MB","current-db-size-in-use-bytes":4055040,"current-db-size-in-use":"4.1 MB"}
	{"level":"info","ts":"2024-09-20T18:23:21.288299Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":517065302,"revision":1506,"compact-revision":-1}
	{"level":"info","ts":"2024-09-20T18:23:22.430993Z","caller":"traceutil/trace.go:171","msg":"trace[200479020] transaction","detail":"{read_only:false; response_revision:2108; number_of_response:1; }","duration":"314.888557ms","start":"2024-09-20T18:23:22.116093Z","end":"2024-09-20T18:23:22.430981Z","steps":["trace[200479020] 'process raft request'  (duration: 314.552392ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:23:22.431107Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T18:23:22.116078Z","time spent":"314.951125ms","remote":"127.0.0.1:37058","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:2038 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:425 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	{"level":"info","ts":"2024-09-20T18:23:23.865254Z","caller":"traceutil/trace.go:171","msg":"trace[102178879] linearizableReadLoop","detail":"{readStateIndex:2258; appliedIndex:2257; }","duration":"203.488059ms","start":"2024-09-20T18:23:23.661753Z","end":"2024-09-20T18:23:23.865241Z","steps":["trace[102178879] 'read index received'  (duration: 203.347953ms)","trace[102178879] 'applied index is now lower than readState.Index'  (duration: 139.623µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T18:23:23.865357Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.585815ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:23:23.865380Z","caller":"traceutil/trace.go:171","msg":"trace[1945616439] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2110; }","duration":"203.624964ms","start":"2024-09-20T18:23:23.661749Z","end":"2024-09-20T18:23:23.865374Z","steps":["trace[1945616439] 'agreement among raft nodes before linearized reading'  (duration: 203.546895ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:23:23.865639Z","caller":"traceutil/trace.go:171","msg":"trace[1429413700] transaction","detail":"{read_only:false; response_revision:2110; number_of_response:1; }","duration":"210.845365ms","start":"2024-09-20T18:23:23.654785Z","end":"2024-09-20T18:23:23.865631Z","steps":["trace[1429413700] 'process raft request'  (duration: 210.352466ms)"],"step_count":1}
	
	
	==> gcp-auth [7c4b9c3a7c53984fdcd53d01df116f55695ae712f2f303bd6c13b7f7ae352228] <==
	2024/09/20 18:14:53 Ready to write response ...
	2024/09/20 18:14:55 Ready to marshal response ...
	2024/09/20 18:14:55 Ready to write response ...
	2024/09/20 18:14:55 Ready to marshal response ...
	2024/09/20 18:14:55 Ready to write response ...
	2024/09/20 18:22:59 Ready to marshal response ...
	2024/09/20 18:22:59 Ready to write response ...
	2024/09/20 18:22:59 Ready to marshal response ...
	2024/09/20 18:22:59 Ready to write response ...
	2024/09/20 18:22:59 Ready to marshal response ...
	2024/09/20 18:22:59 Ready to write response ...
	2024/09/20 18:23:05 Ready to marshal response ...
	2024/09/20 18:23:05 Ready to write response ...
	2024/09/20 18:23:05 Ready to marshal response ...
	2024/09/20 18:23:05 Ready to write response ...
	2024/09/20 18:23:10 Ready to marshal response ...
	2024/09/20 18:23:10 Ready to write response ...
	2024/09/20 18:23:15 Ready to marshal response ...
	2024/09/20 18:23:15 Ready to write response ...
	2024/09/20 18:23:18 Ready to marshal response ...
	2024/09/20 18:23:18 Ready to write response ...
	2024/09/20 18:23:29 Ready to marshal response ...
	2024/09/20 18:23:29 Ready to write response ...
	2024/09/20 18:23:37 Ready to marshal response ...
	2024/09/20 18:23:37 Ready to write response ...
	
	
	==> kernel <==
	 18:28:09 up 15 min,  0 users,  load average: 0.05, 0.18, 0.25
	Linux addons-446299 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [730952f4127d66b35d731eb28568293e71789263c71a1a0255283cb51922992c] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0920 18:15:27.823202       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:15:27.823313       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0920 18:15:27.823420       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:15:27.823588       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 18:15:27.824490       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 18:15:27.825326       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0920 18:15:31.828151       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.147.48:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.147.48:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	W0920 18:15:31.828390       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:15:31.828450       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 18:15:31.847786       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0920 18:15:31.853561       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0920 18:22:59.185908       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.29.221"}
	I0920 18:23:23.918494       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0920 18:23:25.009930       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0920 18:23:29.482103       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0920 18:23:29.675487       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.190.241"}
	I0920 18:23:30.728395       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [a8af18aadd9a198bf616d46a7b451c4aa04e96f96e40f4b3bfe6f0ed2db6278e] <==
	I0920 18:23:29.815035       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0920 18:23:29.815092       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 18:23:30.342351       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0920 18:23:30.342391       1 shared_informer.go:320] Caches are synced for garbage collector
	I0920 18:23:34.046159       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W0920 18:23:34.852782       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:23:34.852837       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:23:45.509339       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:23:45.509390       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:24:03.134228       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:24:03.134359       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 18:24:11.155220       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="8.255µs"
	W0920 18:24:29.364098       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:24:29.364246       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:25:01.947190       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:25:01.947288       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:25:33.105344       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:25:33.105500       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:26:14.610422       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:26:14.610571       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:27:08.759968       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:27:08.760083       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:27:45.244240       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:27:45.244314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 18:28:08.201785       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="9.76µs"
	
	
	==> kube-proxy [371fb9f89e965c1d1f23b67cb00baa69dc199d2d1a7cb0255780a4516c7256a6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:13:32.095684       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 18:13:32.111185       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.237"]
	E0920 18:13:32.111246       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:13:32.254832       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:13:32.254884       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:13:32.254908       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:13:32.262039       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:13:32.262450       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:13:32.262484       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:13:32.268397       1 config.go:199] "Starting service config controller"
	I0920 18:13:32.268443       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:13:32.268473       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:13:32.268477       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:13:32.268988       1 config.go:328] "Starting node config controller"
	I0920 18:13:32.268994       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:13:32.368877       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:13:32.368886       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:13:32.369073       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e9e7734f588477ea0c8338b75bff4c99d2033144998f9977041fbf99b5880072] <==
	W0920 18:13:22.809246       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 18:13:22.809282       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:22.809585       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 18:13:22.809621       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:22.813253       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 18:13:22.813298       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:22.813377       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 18:13:22.813413       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:22.813464       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 18:13:22.813478       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:22.815129       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 18:13:22.815174       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:23.637031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 18:13:23.637068       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:23.746262       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 18:13:23.746361       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:23.943434       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 18:13:23.943536       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:23.956043       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 18:13:23.956129       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:23.968884       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 18:13:23.969017       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:24.340405       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 18:13:24.340516       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0920 18:13:27.096843       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 18:27:35 addons-446299 kubelet[1199]: E0920 18:27:35.579175    1199 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856855578407910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:27:39 addons-446299 kubelet[1199]: E0920 18:27:39.167958    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="785bf044-a4fc-4f3b-aa48-f0c32d84c0cb"
	Sep 20 18:27:44 addons-446299 kubelet[1199]: E0920 18:27:44.167142    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod-restore" podUID="c0105316-5ff3-4ccd-8862-0a9a1965982f"
	Sep 20 18:27:45 addons-446299 kubelet[1199]: E0920 18:27:45.581834    1199 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856865581256799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:27:45 addons-446299 kubelet[1199]: E0920 18:27:45.581877    1199 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856865581256799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:27:52 addons-446299 kubelet[1199]: E0920 18:27:52.166780    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="785bf044-a4fc-4f3b-aa48-f0c32d84c0cb"
	Sep 20 18:27:55 addons-446299 kubelet[1199]: E0920 18:27:55.584648    1199 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856875584294299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:27:55 addons-446299 kubelet[1199]: E0920 18:27:55.584756    1199 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856875584294299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:28:05 addons-446299 kubelet[1199]: E0920 18:28:05.587847    1199 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856885587246336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:28:05 addons-446299 kubelet[1199]: E0920 18:28:05.588116    1199 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856885587246336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:28:07 addons-446299 kubelet[1199]: E0920 18:28:07.166785    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="785bf044-a4fc-4f3b-aa48-f0c32d84c0cb"
	Sep 20 18:28:09 addons-446299 kubelet[1199]: E0920 18:28:09.285814    1199 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from image index: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 20 18:28:09 addons-446299 kubelet[1199]: E0920 18:28:09.285890    1199 kuberuntime_image.go:55] "Failed to pull image" err="fetching target platform image selected from image index: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 20 18:28:09 addons-446299 kubelet[1199]: E0920 18:28:09.286349    1199 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8zg4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx_default(e00699c2-7689-43aa-9a79-f6b8682fbe91): ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 20 18:28:09 addons-446299 kubelet[1199]: E0920 18:28:09.288782    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"fetching target platform image selected from image index: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e00699c2-7689-43aa-9a79-f6b8682fbe91"
	Sep 20 18:28:09 addons-446299 kubelet[1199]: I0920 18:28:09.606423    1199 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grgvf\" (UniqueName: \"kubernetes.io/projected/84513540-b090-4d24-b6e0-9ed764434018-kube-api-access-grgvf\") pod \"84513540-b090-4d24-b6e0-9ed764434018\" (UID: \"84513540-b090-4d24-b6e0-9ed764434018\") "
	Sep 20 18:28:09 addons-446299 kubelet[1199]: I0920 18:28:09.606473    1199 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/84513540-b090-4d24-b6e0-9ed764434018-tmp-dir\") pod \"84513540-b090-4d24-b6e0-9ed764434018\" (UID: \"84513540-b090-4d24-b6e0-9ed764434018\") "
	Sep 20 18:28:09 addons-446299 kubelet[1199]: I0920 18:28:09.606928    1199 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84513540-b090-4d24-b6e0-9ed764434018-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "84513540-b090-4d24-b6e0-9ed764434018" (UID: "84513540-b090-4d24-b6e0-9ed764434018"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 20 18:28:09 addons-446299 kubelet[1199]: I0920 18:28:09.616881    1199 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84513540-b090-4d24-b6e0-9ed764434018-kube-api-access-grgvf" (OuterVolumeSpecName: "kube-api-access-grgvf") pod "84513540-b090-4d24-b6e0-9ed764434018" (UID: "84513540-b090-4d24-b6e0-9ed764434018"). InnerVolumeSpecName "kube-api-access-grgvf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 18:28:09 addons-446299 kubelet[1199]: I0920 18:28:09.692838    1199 scope.go:117] "RemoveContainer" containerID="3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844"
	Sep 20 18:28:09 addons-446299 kubelet[1199]: I0920 18:28:09.706914    1199 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/84513540-b090-4d24-b6e0-9ed764434018-tmp-dir\") on node \"addons-446299\" DevicePath \"\""
	Sep 20 18:28:09 addons-446299 kubelet[1199]: I0920 18:28:09.706954    1199 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-grgvf\" (UniqueName: \"kubernetes.io/projected/84513540-b090-4d24-b6e0-9ed764434018-kube-api-access-grgvf\") on node \"addons-446299\" DevicePath \"\""
	Sep 20 18:28:09 addons-446299 kubelet[1199]: I0920 18:28:09.729935    1199 scope.go:117] "RemoveContainer" containerID="3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844"
	Sep 20 18:28:09 addons-446299 kubelet[1199]: E0920 18:28:09.730791    1199 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844\": container with ID starting with 3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844 not found: ID does not exist" containerID="3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844"
	Sep 20 18:28:09 addons-446299 kubelet[1199]: I0920 18:28:09.730831    1199 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844"} err="failed to get container status \"3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844\": rpc error: code = NotFound desc = could not find container \"3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844\": container with ID starting with 3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844 not found: ID does not exist"
	
	
	==> storage-provisioner [123e17c57dc2abd9c047233f8257257a3994d71637992344add53ad7199bd9f0] <==
	I0920 18:13:37.673799       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 18:13:37.889195       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 18:13:37.889268       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 18:13:37.991169       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 18:13:37.991374       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-446299_0cfdff58-c718-409b-bc42-bb5f67205de8!
	I0920 18:13:37.992328       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8e2a2b2a-26e5-43f5-ad91-442df4e21dfd", APIVersion:"v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-446299_0cfdff58-c718-409b-bc42-bb5f67205de8 became leader
	I0920 18:13:38.191750       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-446299_0cfdff58-c718-409b-bc42-bb5f67205de8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-446299 -n addons-446299
helpers_test.go:261: (dbg) Run:  kubectl --context addons-446299 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox nginx task-pv-pod-restore ingress-nginx-admission-create-sdwls ingress-nginx-admission-patch-2mwr8
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-446299 describe pod busybox nginx task-pv-pod-restore ingress-nginx-admission-create-sdwls ingress-nginx-admission-patch-2mwr8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-446299 describe pod busybox nginx task-pv-pod-restore ingress-nginx-admission-create-sdwls ingress-nginx-admission-patch-2mwr8: exit status 1 (85.986688ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-446299/192.168.39.237
	Start Time:       Fri, 20 Sep 2024 18:14:55 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s6l6f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-s6l6f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  13m                  default-scheduler  Successfully assigned default/busybox to addons-446299
	  Normal   Pulling    11m (x4 over 13m)    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     11m (x4 over 13m)    kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     11m (x4 over 13m)    kubelet            Error: ErrImagePull
	  Warning  Failed     11m (x6 over 13m)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3m6s (x42 over 13m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-446299/192.168.39.237
	Start Time:       Fri, 20 Sep 2024 18:23:29 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8zg4g (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8zg4g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m41s                default-scheduler  Successfully assigned default/nginx to addons-446299
	  Warning  Failed     4m9s                 kubelet            Failed to pull image "docker.io/nginx:alpine": copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     96s (x2 over 3m8s)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    60s (x5 over 4m8s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     60s (x5 over 4m8s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    46s (x4 over 4m40s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     1s (x4 over 4m9s)    kubelet            Error: ErrImagePull
	  Warning  Failed     1s                   kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	
	
	Name:             task-pv-pod-restore
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-446299/192.168.39.237
	Start Time:       Fri, 20 Sep 2024 18:23:37 +0000
	Labels:           app=task-pv-pod-restore
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zzgp9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc-restore
	    ReadOnly:   false
	  kube-api-access-zzgp9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m33s                default-scheduler  Successfully assigned default/task-pv-pod-restore to addons-446299
	  Warning  Failed     2m7s                 kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     65s (x2 over 3m38s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     65s (x3 over 3m38s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    26s (x5 over 3m38s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     26s (x5 over 3m38s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    15s (x4 over 4m32s)  kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-sdwls" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2mwr8" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-446299 describe pod busybox nginx task-pv-pod-restore ingress-nginx-admission-create-sdwls ingress-nginx-admission-patch-2mwr8: exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (294.60s)

                                                
                                    
x
+
TestAddons/parallel/CSI (384.07s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0920 18:23:16.405035  748497 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0920 18:23:16.411705  748497 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 18:23:16.411729  748497 kapi.go:107] duration metric: took 6.727267ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 6.73583ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-446299 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-446299 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-446299 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-446299 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-446299 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [06bcc07b-77e9-48cd-975b-e1a2ff9ba523] Pending
helpers_test.go:344: "task-pv-pod" [06bcc07b-77e9-48cd-975b-e1a2ff9ba523] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [06bcc07b-77e9-48cd-975b-e1a2ff9ba523] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.004344535s
addons_test.go:528: (dbg) Run:  kubectl --context addons-446299 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-446299 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-446299 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-446299 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-446299 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-446299 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-446299 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-446299 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-446299 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-446299 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-446299 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-446299 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-446299 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c0105316-5ff3-4ccd-8862-0a9a1965982f] Pending
helpers_test.go:344: "task-pv-pod-restore" [c0105316-5ff3-4ccd-8862-0a9a1965982f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:329: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod-restore" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:565: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod-restore" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:565: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-446299 -n addons-446299
addons_test.go:565: TestAddons/parallel/CSI: showing logs for failed pods as of 2024-09-20 18:29:38.158695691 +0000 UTC m=+1030.004340283
addons_test.go:565: (dbg) Run:  kubectl --context addons-446299 describe po task-pv-pod-restore -n default
addons_test.go:565: (dbg) kubectl --context addons-446299 describe po task-pv-pod-restore -n default:
Name:             task-pv-pod-restore
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-446299/192.168.39.237
Start Time:       Fri, 20 Sep 2024 18:23:37 +0000
Labels:           app=task-pv-pod-restore
Annotations:      <none>
Status:           Pending
IP:               10.244.0.30
IPs:
  IP:  10.244.0.30
Containers:
  task-pv-container:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /usr/share/nginx/html from task-pv-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zzgp9 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  task-pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hpvc-restore
    ReadOnly:   false
  kube-api-access-zzgp9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  6m1s                default-scheduler  Successfully assigned default/task-pv-pod-restore to addons-446299
  Warning  Failed     3m35s               kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   Pulling    103s (x4 over 6m)   kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     59s (x3 over 5m6s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     59s (x4 over 5m6s)  kubelet            Error: ErrImagePull
  Normal   BackOff    31s (x7 over 5m6s)  kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     31s (x7 over 5m6s)  kubelet            Error: ImagePullBackOff
addons_test.go:565: (dbg) Run:  kubectl --context addons-446299 logs task-pv-pod-restore -n default
addons_test.go:565: (dbg) Non-zero exit: kubectl --context addons-446299 logs task-pv-pod-restore -n default: exit status 1 (82.08278ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod-restore" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:565: kubectl --context addons-446299 logs task-pv-pod-restore -n default: exit status 1
addons_test.go:566: failed waiting for pod task-pv-pod-restore: app=task-pv-pod-restore within 6m0s: context deadline exceeded
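Note: the repeated "toomanyrequests" events above are Docker Hub's anonymous pull rate limit, not a CSI-driver fault. A minimal workaround sketch, assuming the profile name addons-446299 from this run and a host docker client that has already authenticated with "docker login", is to pull the nginx images once on the host and load them into the cluster's CRI-O image store so the kubelet never pulls from docker.io:

	# sketch only: pre-seed the images the CSI test pods use (nginx, nginx:alpine appear in the events above)
	docker pull docker.io/nginx:latest
	docker pull docker.io/nginx:alpine
	out/minikube-linux-amd64 -p addons-446299 image load docker.io/nginx:latest
	out/minikube-linux-amd64 -p addons-446299 image load docker.io/nginx:alpine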
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-446299 -n addons-446299
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-446299 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-446299 logs -n 25: (1.351748513s)
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-675466 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | -p download-only-675466                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
	| delete  | -p download-only-675466                                                                     | download-only-675466 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
	| start   | -o=json --download-only                                                                     | download-only-363869 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | -p download-only-363869                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
	| delete  | -p download-only-363869                                                                     | download-only-363869 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
	| delete  | -p download-only-675466                                                                     | download-only-675466 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
	| delete  | -p download-only-363869                                                                     | download-only-363869 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-747965 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | binary-mirror-747965                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39359                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-747965                                                                     | binary-mirror-747965 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
	| addons  | enable dashboard -p                                                                         | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | addons-446299                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | addons-446299                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-446299 --wait=true                                                                | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:14 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:22 UTC | 20 Sep 24 18:22 UTC |
	|         | -p addons-446299                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
	|         | -p addons-446299                                                                            |                      |         |         |                     |                     |
	| addons  | addons-446299 addons disable                                                                | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-446299 addons disable                                                                | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
	|         | addons-446299                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-446299 ssh cat                                                                       | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
	|         | /opt/local-path-provisioner/pvc-11168afa-d97c-4581-90a8-f19b354e2c35_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-446299 addons disable                                                                | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
	|         | addons-446299                                                                               |                      |         |         |                     |                     |
	| ip      | addons-446299 ip                                                                            | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:24 UTC | 20 Sep 24 18:24 UTC |
	| addons  | addons-446299 addons disable                                                                | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:24 UTC | 20 Sep 24 18:24 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-446299 addons                                                                        | addons-446299        | jenkins | v1.34.0 | 20 Sep 24 18:28 UTC | 20 Sep 24 18:28 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:12:45
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:12:45.452837  749135 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:12:45.452957  749135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:12:45.452966  749135 out.go:358] Setting ErrFile to fd 2...
	I0920 18:12:45.452970  749135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:12:45.453156  749135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
	I0920 18:12:45.453777  749135 out.go:352] Setting JSON to false
	I0920 18:12:45.454793  749135 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6915,"bootTime":1726849050,"procs":270,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:12:45.454907  749135 start.go:139] virtualization: kvm guest
	I0920 18:12:45.457071  749135 out.go:177] * [addons-446299] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:12:45.458344  749135 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 18:12:45.458335  749135 notify.go:220] Checking for updates...
	I0920 18:12:45.459761  749135 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:12:45.461106  749135 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:12:45.462449  749135 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:12:45.463737  749135 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:12:45.465084  749135 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:12:45.466379  749135 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:12:45.497434  749135 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 18:12:45.498519  749135 start.go:297] selected driver: kvm2
	I0920 18:12:45.498542  749135 start.go:901] validating driver "kvm2" against <nil>
	I0920 18:12:45.498561  749135 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:12:45.499322  749135 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:12:45.499411  749135 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19678-739831/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:12:45.513921  749135 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:12:45.513966  749135 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:12:45.514272  749135 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:12:45.514314  749135 cni.go:84] Creating CNI manager for ""
	I0920 18:12:45.514372  749135 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:12:45.514386  749135 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 18:12:45.514458  749135 start.go:340] cluster config:
	{Name:addons-446299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-446299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:12:45.514600  749135 iso.go:125] acquiring lock: {Name:mk7c8e0c52ea50ffb7ac28fb9347d4f667085c62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:12:45.516315  749135 out.go:177] * Starting "addons-446299" primary control-plane node in "addons-446299" cluster
	I0920 18:12:45.517423  749135 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:12:45.517447  749135 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:12:45.517459  749135 cache.go:56] Caching tarball of preloaded images
	I0920 18:12:45.517543  749135 preload.go:172] Found /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:12:45.517552  749135 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:12:45.517857  749135 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/config.json ...
	I0920 18:12:45.517880  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/config.json: {Name:mkaa7e3a2b8a2d95cecdc721e4fd7f5310773e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:12:45.518032  749135 start.go:360] acquireMachinesLock for addons-446299: {Name:mke27b943eaf3105a3a7818ba8cbb5bd07aa92e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:12:45.518095  749135 start.go:364] duration metric: took 46.763µs to acquireMachinesLock for "addons-446299"
	I0920 18:12:45.518131  749135 start.go:93] Provisioning new machine with config: &{Name:addons-446299 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-446299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:12:45.518195  749135 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 18:12:45.520537  749135 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0920 18:12:45.520688  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:12:45.520727  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:12:45.535639  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40567
	I0920 18:12:45.536170  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:12:45.536786  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:12:45.536808  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:12:45.537162  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:12:45.537383  749135 main.go:141] libmachine: (addons-446299) Calling .GetMachineName
	I0920 18:12:45.537540  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:12:45.537694  749135 start.go:159] libmachine.API.Create for "addons-446299" (driver="kvm2")
	I0920 18:12:45.537726  749135 client.go:168] LocalClient.Create starting
	I0920 18:12:45.537791  749135 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem
	I0920 18:12:45.635672  749135 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem
	I0920 18:12:45.854167  749135 main.go:141] libmachine: Running pre-create checks...
	I0920 18:12:45.854195  749135 main.go:141] libmachine: (addons-446299) Calling .PreCreateCheck
	I0920 18:12:45.854768  749135 main.go:141] libmachine: (addons-446299) Calling .GetConfigRaw
	I0920 18:12:45.855238  749135 main.go:141] libmachine: Creating machine...
	I0920 18:12:45.855256  749135 main.go:141] libmachine: (addons-446299) Calling .Create
	I0920 18:12:45.855444  749135 main.go:141] libmachine: (addons-446299) Creating KVM machine...
	I0920 18:12:45.856800  749135 main.go:141] libmachine: (addons-446299) DBG | found existing default KVM network
	I0920 18:12:45.857584  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:45.857437  749157 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015bb0}
	I0920 18:12:45.857661  749135 main.go:141] libmachine: (addons-446299) DBG | created network xml: 
	I0920 18:12:45.857685  749135 main.go:141] libmachine: (addons-446299) DBG | <network>
	I0920 18:12:45.857700  749135 main.go:141] libmachine: (addons-446299) DBG |   <name>mk-addons-446299</name>
	I0920 18:12:45.857710  749135 main.go:141] libmachine: (addons-446299) DBG |   <dns enable='no'/>
	I0920 18:12:45.857722  749135 main.go:141] libmachine: (addons-446299) DBG |   
	I0920 18:12:45.857736  749135 main.go:141] libmachine: (addons-446299) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 18:12:45.857749  749135 main.go:141] libmachine: (addons-446299) DBG |     <dhcp>
	I0920 18:12:45.857762  749135 main.go:141] libmachine: (addons-446299) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 18:12:45.857774  749135 main.go:141] libmachine: (addons-446299) DBG |     </dhcp>
	I0920 18:12:45.857784  749135 main.go:141] libmachine: (addons-446299) DBG |   </ip>
	I0920 18:12:45.857795  749135 main.go:141] libmachine: (addons-446299) DBG |   
	I0920 18:12:45.857805  749135 main.go:141] libmachine: (addons-446299) DBG | </network>
	I0920 18:12:45.857817  749135 main.go:141] libmachine: (addons-446299) DBG | 
	I0920 18:12:45.862810  749135 main.go:141] libmachine: (addons-446299) DBG | trying to create private KVM network mk-addons-446299 192.168.39.0/24...
	I0920 18:12:45.928127  749135 main.go:141] libmachine: (addons-446299) DBG | private KVM network mk-addons-446299 192.168.39.0/24 created
	I0920 18:12:45.928216  749135 main.go:141] libmachine: (addons-446299) Setting up store path in /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299 ...
	I0920 18:12:45.928243  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:45.928106  749157 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:12:45.928255  749135 main.go:141] libmachine: (addons-446299) Building disk image from file:///home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 18:12:45.928282  749135 main.go:141] libmachine: (addons-446299) Downloading /home/jenkins/minikube-integration/19678-739831/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 18:12:46.198371  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:46.198204  749157 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa...
	I0920 18:12:46.306630  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:46.306482  749157 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/addons-446299.rawdisk...
	I0920 18:12:46.306662  749135 main.go:141] libmachine: (addons-446299) DBG | Writing magic tar header
	I0920 18:12:46.306673  749135 main.go:141] libmachine: (addons-446299) DBG | Writing SSH key tar header
	I0920 18:12:46.306681  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:46.306605  749157 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299 ...
	I0920 18:12:46.306695  749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299
	I0920 18:12:46.306758  749135 main.go:141] libmachine: (addons-446299) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299 (perms=drwx------)
	I0920 18:12:46.306798  749135 main.go:141] libmachine: (addons-446299) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube/machines (perms=drwxr-xr-x)
	I0920 18:12:46.306816  749135 main.go:141] libmachine: (addons-446299) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube (perms=drwxr-xr-x)
	I0920 18:12:46.306825  749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube/machines
	I0920 18:12:46.306839  749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:12:46.306872  749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831
	I0920 18:12:46.306884  749135 main.go:141] libmachine: (addons-446299) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831 (perms=drwxrwxr-x)
	I0920 18:12:46.306904  749135 main.go:141] libmachine: (addons-446299) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 18:12:46.306929  749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 18:12:46.306939  749135 main.go:141] libmachine: (addons-446299) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 18:12:46.306952  749135 main.go:141] libmachine: (addons-446299) Creating domain...
	I0920 18:12:46.306963  749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home/jenkins
	I0920 18:12:46.306969  749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home
	I0920 18:12:46.306976  749135 main.go:141] libmachine: (addons-446299) DBG | Skipping /home - not owner
	I0920 18:12:46.308063  749135 main.go:141] libmachine: (addons-446299) define libvirt domain using xml: 
	I0920 18:12:46.308090  749135 main.go:141] libmachine: (addons-446299) <domain type='kvm'>
	I0920 18:12:46.308100  749135 main.go:141] libmachine: (addons-446299)   <name>addons-446299</name>
	I0920 18:12:46.308107  749135 main.go:141] libmachine: (addons-446299)   <memory unit='MiB'>4000</memory>
	I0920 18:12:46.308114  749135 main.go:141] libmachine: (addons-446299)   <vcpu>2</vcpu>
	I0920 18:12:46.308128  749135 main.go:141] libmachine: (addons-446299)   <features>
	I0920 18:12:46.308136  749135 main.go:141] libmachine: (addons-446299)     <acpi/>
	I0920 18:12:46.308144  749135 main.go:141] libmachine: (addons-446299)     <apic/>
	I0920 18:12:46.308150  749135 main.go:141] libmachine: (addons-446299)     <pae/>
	I0920 18:12:46.308156  749135 main.go:141] libmachine: (addons-446299)     
	I0920 18:12:46.308161  749135 main.go:141] libmachine: (addons-446299)   </features>
	I0920 18:12:46.308167  749135 main.go:141] libmachine: (addons-446299)   <cpu mode='host-passthrough'>
	I0920 18:12:46.308172  749135 main.go:141] libmachine: (addons-446299)   
	I0920 18:12:46.308184  749135 main.go:141] libmachine: (addons-446299)   </cpu>
	I0920 18:12:46.308194  749135 main.go:141] libmachine: (addons-446299)   <os>
	I0920 18:12:46.308203  749135 main.go:141] libmachine: (addons-446299)     <type>hvm</type>
	I0920 18:12:46.308221  749135 main.go:141] libmachine: (addons-446299)     <boot dev='cdrom'/>
	I0920 18:12:46.308234  749135 main.go:141] libmachine: (addons-446299)     <boot dev='hd'/>
	I0920 18:12:46.308243  749135 main.go:141] libmachine: (addons-446299)     <bootmenu enable='no'/>
	I0920 18:12:46.308250  749135 main.go:141] libmachine: (addons-446299)   </os>
	I0920 18:12:46.308255  749135 main.go:141] libmachine: (addons-446299)   <devices>
	I0920 18:12:46.308262  749135 main.go:141] libmachine: (addons-446299)     <disk type='file' device='cdrom'>
	I0920 18:12:46.308277  749135 main.go:141] libmachine: (addons-446299)       <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/boot2docker.iso'/>
	I0920 18:12:46.308290  749135 main.go:141] libmachine: (addons-446299)       <target dev='hdc' bus='scsi'/>
	I0920 18:12:46.308302  749135 main.go:141] libmachine: (addons-446299)       <readonly/>
	I0920 18:12:46.308312  749135 main.go:141] libmachine: (addons-446299)     </disk>
	I0920 18:12:46.308324  749135 main.go:141] libmachine: (addons-446299)     <disk type='file' device='disk'>
	I0920 18:12:46.308335  749135 main.go:141] libmachine: (addons-446299)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 18:12:46.308350  749135 main.go:141] libmachine: (addons-446299)       <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/addons-446299.rawdisk'/>
	I0920 18:12:46.308364  749135 main.go:141] libmachine: (addons-446299)       <target dev='hda' bus='virtio'/>
	I0920 18:12:46.308376  749135 main.go:141] libmachine: (addons-446299)     </disk>
	I0920 18:12:46.308386  749135 main.go:141] libmachine: (addons-446299)     <interface type='network'>
	I0920 18:12:46.308395  749135 main.go:141] libmachine: (addons-446299)       <source network='mk-addons-446299'/>
	I0920 18:12:46.308404  749135 main.go:141] libmachine: (addons-446299)       <model type='virtio'/>
	I0920 18:12:46.308414  749135 main.go:141] libmachine: (addons-446299)     </interface>
	I0920 18:12:46.308424  749135 main.go:141] libmachine: (addons-446299)     <interface type='network'>
	I0920 18:12:46.308440  749135 main.go:141] libmachine: (addons-446299)       <source network='default'/>
	I0920 18:12:46.308454  749135 main.go:141] libmachine: (addons-446299)       <model type='virtio'/>
	I0920 18:12:46.308462  749135 main.go:141] libmachine: (addons-446299)     </interface>
	I0920 18:12:46.308467  749135 main.go:141] libmachine: (addons-446299)     <serial type='pty'>
	I0920 18:12:46.308472  749135 main.go:141] libmachine: (addons-446299)       <target port='0'/>
	I0920 18:12:46.308478  749135 main.go:141] libmachine: (addons-446299)     </serial>
	I0920 18:12:46.308486  749135 main.go:141] libmachine: (addons-446299)     <console type='pty'>
	I0920 18:12:46.308493  749135 main.go:141] libmachine: (addons-446299)       <target type='serial' port='0'/>
	I0920 18:12:46.308498  749135 main.go:141] libmachine: (addons-446299)     </console>
	I0920 18:12:46.308504  749135 main.go:141] libmachine: (addons-446299)     <rng model='virtio'>
	I0920 18:12:46.308512  749135 main.go:141] libmachine: (addons-446299)       <backend model='random'>/dev/random</backend>
	I0920 18:12:46.308518  749135 main.go:141] libmachine: (addons-446299)     </rng>
	I0920 18:12:46.308522  749135 main.go:141] libmachine: (addons-446299)     
	I0920 18:12:46.308528  749135 main.go:141] libmachine: (addons-446299)     
	I0920 18:12:46.308544  749135 main.go:141] libmachine: (addons-446299)   </devices>
	I0920 18:12:46.308556  749135 main.go:141] libmachine: (addons-446299) </domain>
	I0920 18:12:46.308574  749135 main.go:141] libmachine: (addons-446299) 
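	Note: the domain and the private network defined in the XML above can also be inspected directly on the host with virsh; a sketch, assuming the qemu:///system connection shown in the cluster config dump and the names from this log:
	
	  virsh --connect qemu:///system dumpxml addons-446299        # show the running domain definition
	  virsh --connect qemu:///system net-dumpxml mk-addons-446299 # show the private network created for it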
	I0920 18:12:46.314191  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:13:6e:16 in network default
	I0920 18:12:46.314696  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:46.314712  749135 main.go:141] libmachine: (addons-446299) Ensuring networks are active...
	I0920 18:12:46.315254  749135 main.go:141] libmachine: (addons-446299) Ensuring network default is active
	I0920 18:12:46.315494  749135 main.go:141] libmachine: (addons-446299) Ensuring network mk-addons-446299 is active
	I0920 18:12:46.315890  749135 main.go:141] libmachine: (addons-446299) Getting domain xml...
	I0920 18:12:46.316428  749135 main.go:141] libmachine: (addons-446299) Creating domain...
	I0920 18:12:47.702575  749135 main.go:141] libmachine: (addons-446299) Waiting to get IP...
	I0920 18:12:47.703586  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:47.704120  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:47.704148  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:47.704086  749157 retry.go:31] will retry after 271.659022ms: waiting for machine to come up
	I0920 18:12:47.977759  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:47.978244  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:47.978271  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:47.978199  749157 retry.go:31] will retry after 286.269777ms: waiting for machine to come up
	I0920 18:12:48.265706  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:48.266154  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:48.266176  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:48.266104  749157 retry.go:31] will retry after 302.528012ms: waiting for machine to come up
	I0920 18:12:48.570875  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:48.571362  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:48.571386  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:48.571312  749157 retry.go:31] will retry after 579.846713ms: waiting for machine to come up
	I0920 18:12:49.153045  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:49.153478  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:49.153506  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:49.153418  749157 retry.go:31] will retry after 501.770816ms: waiting for machine to come up
	I0920 18:12:49.657032  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:49.657383  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:49.657410  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:49.657355  749157 retry.go:31] will retry after 903.967154ms: waiting for machine to come up
	I0920 18:12:50.562781  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:50.563350  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:50.563375  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:50.563286  749157 retry.go:31] will retry after 1.03177474s: waiting for machine to come up
	I0920 18:12:51.596424  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:51.596850  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:51.596971  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:51.596890  749157 retry.go:31] will retry after 1.278733336s: waiting for machine to come up
	I0920 18:12:52.877368  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:52.877732  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:52.877761  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:52.877690  749157 retry.go:31] will retry after 1.241144447s: waiting for machine to come up
	I0920 18:12:54.121228  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:54.121598  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:54.121623  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:54.121564  749157 retry.go:31] will retry after 2.253509148s: waiting for machine to come up
	I0920 18:12:56.377139  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:56.377598  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:56.377630  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:56.377537  749157 retry.go:31] will retry after 2.563830681s: waiting for machine to come up
	I0920 18:12:58.944264  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:12:58.944679  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:12:58.944723  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:58.944624  749157 retry.go:31] will retry after 2.392098661s: waiting for machine to come up
	I0920 18:13:01.339634  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:01.340032  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:13:01.340088  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:13:01.339990  749157 retry.go:31] will retry after 2.800869076s: waiting for machine to come up
	I0920 18:13:04.142006  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:04.142476  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
	I0920 18:13:04.142500  749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:13:04.142411  749157 retry.go:31] will retry after 4.101773144s: waiting for machine to come up
	I0920 18:13:08.247401  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.247831  749135 main.go:141] libmachine: (addons-446299) Found IP for machine: 192.168.39.237
	I0920 18:13:08.247867  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has current primary IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.247875  749135 main.go:141] libmachine: (addons-446299) Reserving static IP address...
	I0920 18:13:08.248197  749135 main.go:141] libmachine: (addons-446299) DBG | unable to find host DHCP lease matching {name: "addons-446299", mac: "52:54:00:33:9c:3e", ip: "192.168.39.237"} in network mk-addons-446299
	I0920 18:13:08.320366  749135 main.go:141] libmachine: (addons-446299) DBG | Getting to WaitForSSH function...
	I0920 18:13:08.320400  749135 main.go:141] libmachine: (addons-446299) Reserved static IP address: 192.168.39.237
	I0920 18:13:08.320413  749135 main.go:141] libmachine: (addons-446299) Waiting for SSH to be available...
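The lines above show the driver polling libvirt for a DHCP lease on mk-addons-446299 with growing delays (271ms up to ~4.1s) until 192.168.39.237 appears, then reserving that address. A minimal sketch of such a wait loop, assuming a hypothetical lookupLeaseIP helper in place of the real libvirt lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP stands in for the real libvirt DHCP-lease query; it is a
// hypothetical helper for this sketch and simply fails the first few times.
func lookupLeaseIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.39.237", nil
}

// waitForIP retries the lease lookup with a growing, jittered delay until the
// machine reports an address or the deadline passes.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for attempt := 0; time.Now().Before(deadline); attempt++ {
		ip, err := lookupLeaseIP(attempt)
		if err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2 // back off between polls, as the log's lengthening intervals suggest
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	ip, err := waitForIP(2 * time.Minute)
	fmt.Println(ip, err)
}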
	I0920 18:13:08.323450  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.323840  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:minikube Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:08.323876  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.324043  749135 main.go:141] libmachine: (addons-446299) DBG | Using SSH client type: external
	I0920 18:13:08.324075  749135 main.go:141] libmachine: (addons-446299) DBG | Using SSH private key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa (-rw-------)
	I0920 18:13:08.324116  749135 main.go:141] libmachine: (addons-446299) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.237 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:13:08.324134  749135 main.go:141] libmachine: (addons-446299) DBG | About to run SSH command:
	I0920 18:13:08.324145  749135 main.go:141] libmachine: (addons-446299) DBG | exit 0
	I0920 18:13:08.447247  749135 main.go:141] libmachine: (addons-446299) DBG | SSH cmd err, output: <nil>: 
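Reachability is confirmed by shelling out to the system ssh client with the options logged above and running `exit 0`. A rough os/exec equivalent, using the host, key path, and flags taken from the log; this is a sketch of the probe, not minikube's actual WaitForSSH code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	key := "/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa"
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes", "-i", key, "-p", "22",
		"docker@192.168.39.237", "exit 0",
	}
	// Poll until `ssh ... exit 0` succeeds, mirroring the
	// "Waiting for SSH to be available..." loop in the log.
	for i := 0; i < 30; i++ {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			fmt.Println("SSH cmd err, output: <nil>")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}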
	I0920 18:13:08.447526  749135 main.go:141] libmachine: (addons-446299) KVM machine creation complete!
	I0920 18:13:08.447847  749135 main.go:141] libmachine: (addons-446299) Calling .GetConfigRaw
	I0920 18:13:08.448509  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:08.448699  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:08.448836  749135 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 18:13:08.448855  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:08.450187  749135 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 18:13:08.450200  749135 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 18:13:08.450206  749135 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 18:13:08.450212  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:08.452411  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.452723  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:08.452751  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.452850  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:08.453019  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.453174  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.453318  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:08.453492  749135 main.go:141] libmachine: Using SSH client type: native
	I0920 18:13:08.453697  749135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0920 18:13:08.453711  749135 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 18:13:08.550007  749135 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:13:08.550034  749135 main.go:141] libmachine: Detecting the provisioner...
	I0920 18:13:08.550043  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:08.552709  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.553024  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:08.553055  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.553193  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:08.553387  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.553523  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.553628  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:08.553820  749135 main.go:141] libmachine: Using SSH client type: native
	I0920 18:13:08.554035  749135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0920 18:13:08.554048  749135 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 18:13:08.651415  749135 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 18:13:08.651508  749135 main.go:141] libmachine: found compatible host: buildroot
	I0920 18:13:08.651519  749135 main.go:141] libmachine: Provisioning with buildroot...
	I0920 18:13:08.651527  749135 main.go:141] libmachine: (addons-446299) Calling .GetMachineName
	I0920 18:13:08.651799  749135 buildroot.go:166] provisioning hostname "addons-446299"
	I0920 18:13:08.651833  749135 main.go:141] libmachine: (addons-446299) Calling .GetMachineName
	I0920 18:13:08.652051  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:08.654630  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.654993  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:08.655016  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.655142  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:08.655325  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.655472  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.655580  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:08.655728  749135 main.go:141] libmachine: Using SSH client type: native
	I0920 18:13:08.655930  749135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0920 18:13:08.655944  749135 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-446299 && echo "addons-446299" | sudo tee /etc/hostname
	I0920 18:13:08.764545  749135 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-446299
	
	I0920 18:13:08.764579  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:08.767492  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.767918  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:08.767944  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.768198  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:08.768402  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.768591  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:08.768737  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:08.768929  749135 main.go:141] libmachine: Using SSH client type: native
	I0920 18:13:08.769151  749135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0920 18:13:08.769174  749135 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-446299' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-446299/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-446299' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:13:08.875844  749135 main.go:141] libmachine: SSH cmd err, output: <nil>: 
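Hostname provisioning is two idempotent shell snippets run over SSH: write /etc/hostname, then rewrite (or append) the 127.0.1.1 entry only if it is not already correct. A small sketch that just assembles those two commands for a given name, assuming they would then be handed to the same SSH runner:

package main

import "fmt"

// hostnameCommands returns the two shell snippets the provisioner runs: one
// sets the kernel hostname and /etc/hostname, the other keeps /etc/hosts in
// sync without duplicating the 127.0.1.1 line.
func hostnameCommands(name string) (setHostname, fixHosts string) {
	setHostname = fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
	fixHosts = fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name)
	return setHostname, fixHosts
}

func main() {
	set, fix := hostnameCommands("addons-446299")
	fmt.Println(set)
	fmt.Println(fix)
}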
	I0920 18:13:08.875886  749135 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19678-739831/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-739831/.minikube}
	I0920 18:13:08.875933  749135 buildroot.go:174] setting up certificates
	I0920 18:13:08.875949  749135 provision.go:84] configureAuth start
	I0920 18:13:08.875963  749135 main.go:141] libmachine: (addons-446299) Calling .GetMachineName
	I0920 18:13:08.876262  749135 main.go:141] libmachine: (addons-446299) Calling .GetIP
	I0920 18:13:08.878744  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.879098  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:08.879119  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.879270  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:08.881403  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.881836  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:08.881865  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:08.881970  749135 provision.go:143] copyHostCerts
	I0920 18:13:08.882095  749135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem (1078 bytes)
	I0920 18:13:08.882283  749135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem (1123 bytes)
	I0920 18:13:08.882377  749135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem (1679 bytes)
	I0920 18:13:08.882472  749135 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem org=jenkins.addons-446299 san=[127.0.0.1 192.168.39.237 addons-446299 localhost minikube]
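The server certificate above carries both IP and DNS SANs (127.0.0.1, 192.168.39.237, addons-446299, localhost, minikube) and is signed with ca.pem/ca-key.pem. A compact crypto/x509 sketch that splits such a SAN list into IP and DNS entries and issues a certificate for it; it is self-signed here purely to keep the example short, unlike the CA-signed cert in the log:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	sans := []string{"127.0.0.1", "192.168.39.237", "addons-446299", "localhost", "minikube"}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-446299"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Split the SAN list into IP and DNS entries, as the provisioner does.
	for _, san := range sans {
		if ip := net.ParseIP(san); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, san)
		}
	}

	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	// Self-signed for brevity; the real flow signs with the minikube CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}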
	I0920 18:13:09.208189  749135 provision.go:177] copyRemoteCerts
	I0920 18:13:09.208279  749135 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:13:09.208315  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:09.211040  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.211327  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.211351  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.211544  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:09.211780  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.211947  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:09.212123  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:09.297180  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:13:09.320798  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 18:13:09.344012  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:13:09.366859  749135 provision.go:87] duration metric: took 490.878212ms to configureAuth
	I0920 18:13:09.366893  749135 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:13:09.367101  749135 config.go:182] Loaded profile config "addons-446299": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:13:09.367184  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:09.369576  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.369868  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.369896  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.370087  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:09.370268  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.370416  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.370568  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:09.370692  749135 main.go:141] libmachine: Using SSH client type: native
	I0920 18:13:09.370898  749135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0920 18:13:09.370918  749135 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:13:09.580901  749135 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:13:09.580930  749135 main.go:141] libmachine: Checking connection to Docker...
	I0920 18:13:09.580938  749135 main.go:141] libmachine: (addons-446299) Calling .GetURL
	I0920 18:13:09.582415  749135 main.go:141] libmachine: (addons-446299) DBG | Using libvirt version 6000000
	I0920 18:13:09.584573  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.584892  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.584919  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.585053  749135 main.go:141] libmachine: Docker is up and running!
	I0920 18:13:09.585065  749135 main.go:141] libmachine: Reticulating splines...
	I0920 18:13:09.585073  749135 client.go:171] duration metric: took 24.047336599s to LocalClient.Create
	I0920 18:13:09.585100  749135 start.go:167] duration metric: took 24.047408021s to libmachine.API.Create "addons-446299"
	I0920 18:13:09.585116  749135 start.go:293] postStartSetup for "addons-446299" (driver="kvm2")
	I0920 18:13:09.585129  749135 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:13:09.585147  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:09.585408  749135 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:13:09.585435  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:09.587350  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.587666  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.587695  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.587795  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:09.587993  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.588132  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:09.588235  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:09.664940  749135 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:13:09.669300  749135 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:13:09.669326  749135 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/addons for local assets ...
	I0920 18:13:09.669399  749135 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/files for local assets ...
	I0920 18:13:09.669426  749135 start.go:296] duration metric: took 84.302482ms for postStartSetup
	I0920 18:13:09.669464  749135 main.go:141] libmachine: (addons-446299) Calling .GetConfigRaw
	I0920 18:13:09.670097  749135 main.go:141] libmachine: (addons-446299) Calling .GetIP
	I0920 18:13:09.672635  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.673027  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.673059  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.673292  749135 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/config.json ...
	I0920 18:13:09.673507  749135 start.go:128] duration metric: took 24.155298051s to createHost
	I0920 18:13:09.673535  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:09.675782  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.676085  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.676118  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.676239  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:09.676425  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.676577  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.676704  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:09.676850  749135 main.go:141] libmachine: Using SSH client type: native
	I0920 18:13:09.677016  749135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0920 18:13:09.677026  749135 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:13:09.775435  749135 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726855989.751621835
	
	I0920 18:13:09.775464  749135 fix.go:216] guest clock: 1726855989.751621835
	I0920 18:13:09.775474  749135 fix.go:229] Guest: 2024-09-20 18:13:09.751621835 +0000 UTC Remote: 2024-09-20 18:13:09.673520947 +0000 UTC m=+24.255782208 (delta=78.100888ms)
	I0920 18:13:09.775526  749135 fix.go:200] guest clock delta is within tolerance: 78.100888ms
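The clock check runs `date +%s.%N` on the guest and compares it against the host-side timestamp; the 78ms delta here is inside tolerance, so nothing is adjusted. A sketch of that comparison, parsing the same seconds.nanoseconds output (the tolerance constant is an assumption made for illustration):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1726855989.751621835") // value from the log
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 9, 20, 18, 13, 9, 673520947, time.UTC) // host-side timestamp from the log
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold for this sketch
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta < tolerance)
}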
	I0920 18:13:09.775540  749135 start.go:83] releasing machines lock for "addons-446299", held for 24.257428579s
	I0920 18:13:09.775567  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:09.775862  749135 main.go:141] libmachine: (addons-446299) Calling .GetIP
	I0920 18:13:09.778659  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.779012  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.779037  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.779220  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:09.779691  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:09.779841  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:09.779938  749135 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:13:09.779984  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:09.780090  749135 ssh_runner.go:195] Run: cat /version.json
	I0920 18:13:09.780115  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:09.782348  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.782682  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.782703  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.782721  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.782827  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:09.783033  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.783120  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:09.783141  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:09.783235  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:09.783325  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:09.783381  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:09.783467  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:09.783589  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:09.783728  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:09.855541  749135 ssh_runner.go:195] Run: systemctl --version
	I0920 18:13:09.885114  749135 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:13:10.038473  749135 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:13:10.044604  749135 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:13:10.044673  749135 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:13:10.061773  749135 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:13:10.061802  749135 start.go:495] detecting cgroup driver to use...
	I0920 18:13:10.061871  749135 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:13:10.078163  749135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:13:10.092123  749135 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:13:10.092186  749135 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:13:10.105354  749135 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:13:10.118581  749135 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:13:10.228500  749135 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:13:10.385243  749135 docker.go:233] disabling docker service ...
	I0920 18:13:10.385317  749135 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:13:10.399346  749135 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:13:10.411799  749135 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:13:10.532538  749135 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:13:10.657590  749135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:13:10.672417  749135 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:13:10.690910  749135 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:13:10.690989  749135 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:13:10.701918  749135 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:13:10.702004  749135 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:13:10.712909  749135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:13:10.723847  749135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:13:10.734707  749135 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:13:10.745859  749135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:13:10.756720  749135 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:13:10.781698  749135 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:13:10.792301  749135 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:13:10.801512  749135 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:13:10.801614  749135 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:13:10.815061  749135 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:13:10.824568  749135 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:13:10.942263  749135 ssh_runner.go:195] Run: sudo systemctl restart crio
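Between disabling the Docker units and this restart, the runner rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon_cgroup = "pod", unprivileged-port sysctl), loads br_netfilter, and enables ip_forward. A sketch that replays the core shell edits locally with os/exec; it assumes root and an existing drop-in file, so treat it as illustrative rather than something to run on a workstation:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same sed/sysctl/systemctl steps the log shows, in order; each is run
	// via `sh -c`, as ssh_runner does on the guest.
	steps := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo modprobe br_netfilter`,
		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, step := range steps {
		if out, err := exec.Command("sh", "-c", step).CombinedOutput(); err != nil {
			fmt.Printf("step %q failed: %v\n%s", step, err, out)
			return
		}
	}
	fmt.Println("cri-o reconfigured and restarted")
}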
	I0920 18:13:11.344964  749135 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:13:11.345085  749135 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:13:11.350594  749135 start.go:563] Will wait 60s for crictl version
	I0920 18:13:11.350677  749135 ssh_runner.go:195] Run: which crictl
	I0920 18:13:11.354600  749135 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:13:11.392003  749135 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
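Startup then gates (up to 60s each) on the socket file appearing and on crictl answering a version query, which yields the RuntimeName/RuntimeVersion output above. A small sketch of that readiness gate, polling with os.Stat before invoking crictl:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForFile polls for a path to exist, mirroring "Will wait 60s for socket
// path /var/run/crio/crio.sock".
func waitForFile(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForFile("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		panic(err)
	}
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out) // RuntimeName, RuntimeVersion, RuntimeApiVersion as in the log
}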
	I0920 18:13:11.392112  749135 ssh_runner.go:195] Run: crio --version
	I0920 18:13:11.424468  749135 ssh_runner.go:195] Run: crio --version
	I0920 18:13:11.468344  749135 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:13:11.469889  749135 main.go:141] libmachine: (addons-446299) Calling .GetIP
	I0920 18:13:11.472633  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:11.472955  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:11.472986  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:11.473236  749135 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:13:11.477639  749135 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:13:11.490126  749135 kubeadm.go:883] updating cluster {Name:addons-446299 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:addons-446299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:13:11.490246  749135 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:13:11.490303  749135 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:13:11.522179  749135 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 18:13:11.522257  749135 ssh_runner.go:195] Run: which lz4
	I0920 18:13:11.526368  749135 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:13:11.530534  749135 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:13:11.530569  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 18:13:12.754100  749135 crio.go:462] duration metric: took 1.227762585s to copy over tarball
	I0920 18:13:12.754195  749135 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:13:14.814758  749135 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.060523421s)
	I0920 18:13:14.814798  749135 crio.go:469] duration metric: took 2.06066428s to extract the tarball
	I0920 18:13:14.814808  749135 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 18:13:14.850931  749135 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:13:14.892855  749135 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:13:14.892884  749135 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:13:14.892894  749135 kubeadm.go:934] updating node { 192.168.39.237 8443 v1.31.1 crio true true} ...
	I0920 18:13:14.893002  749135 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-446299 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.237
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-446299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
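The kubelet drop-in above is rendered from the node's name, IP, and Kubernetes version and then copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes in this run). A text/template sketch that produces the same shape of unit fragment; the struct field names are this sketch's own, not minikube's:

package main

import (
	"os"
	"text/template"
)

const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	data := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.31.1", "addons-446299", "192.168.39.237"}
	// Render to stdout; the provisioner copies the rendered bytes to the guest instead.
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}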
	I0920 18:13:14.893069  749135 ssh_runner.go:195] Run: crio config
	I0920 18:13:14.935948  749135 cni.go:84] Creating CNI manager for ""
	I0920 18:13:14.935974  749135 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:13:14.935987  749135 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:13:14.936010  749135 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.237 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-446299 NodeName:addons-446299 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.237"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.237 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:13:14.936153  749135 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.237
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-446299"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.237
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.237"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:13:14.936224  749135 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:13:14.945879  749135 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:13:14.945951  749135 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:13:14.955112  749135 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 18:13:14.971443  749135 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:13:14.987494  749135 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0920 18:13:15.004128  749135 ssh_runner.go:195] Run: grep 192.168.39.237	control-plane.minikube.internal$ /etc/hosts
	I0920 18:13:15.008311  749135 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.237	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:13:15.020386  749135 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:13:15.143207  749135 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:13:15.160928  749135 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299 for IP: 192.168.39.237
	I0920 18:13:15.160952  749135 certs.go:194] generating shared ca certs ...
	I0920 18:13:15.160971  749135 certs.go:226] acquiring lock for ca certs: {Name:mkf559981e1ff96dd3b092845a7637f34a653668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.161127  749135 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key
	I0920 18:13:15.288325  749135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt ...
	I0920 18:13:15.288359  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt: {Name:mkd07e710befe398f359697123be87266dbb73cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.288526  749135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key ...
	I0920 18:13:15.288537  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key: {Name:mk8452559729a4e6fe54cdcaa3db5cb2d03b365d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.288610  749135 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key
	I0920 18:13:15.460720  749135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt ...
	I0920 18:13:15.460749  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt: {Name:mkd5912367400d11fe28d50162d9491c1c026ad6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.460926  749135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key ...
	I0920 18:13:15.460946  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key: {Name:mk7b4a10567303413b299060d87451a86c82a4b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.461047  749135 certs.go:256] generating profile certs ...
	I0920 18:13:15.461131  749135 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.key
	I0920 18:13:15.461148  749135 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt with IP's: []
	I0920 18:13:15.666412  749135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt ...
	I0920 18:13:15.666455  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: {Name:mkef01489d7dcf2bfb46ac5af11bed50283fb691 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.666668  749135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.key ...
	I0920 18:13:15.666687  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.key: {Name:mkce7236a454e2c0202c83ef853c169198fb2f81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.666791  749135 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.key.77016387
	I0920 18:13:15.666816  749135 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.crt.77016387 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.237]
	I0920 18:13:15.705625  749135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.crt.77016387 ...
	I0920 18:13:15.705654  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.crt.77016387: {Name:mk64bf6bb73ff35990c8781efc3d30626dc3ca21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.705826  749135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.key.77016387 ...
	I0920 18:13:15.705843  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.key.77016387: {Name:mk18ead88f15a69013b31853d623fd0cb8c39466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.705941  749135 certs.go:381] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.crt.77016387 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.crt
	I0920 18:13:15.706040  749135 certs.go:385] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.key.77016387 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.key
	I0920 18:13:15.706114  749135 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.key
	I0920 18:13:15.706140  749135 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.crt with IP's: []
	I0920 18:13:15.788260  749135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.crt ...
	I0920 18:13:15.788293  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.crt: {Name:mk5ff8fc31363db98a0f0ca7278de49be24b8420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:15.788475  749135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.key ...
	I0920 18:13:15.788494  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.key: {Name:mk7a90a72aaffce450a2196a523cb38d8ddfd4f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
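(Editor's note: the crypto.go/certs.go lines above record minikube generating the per-profile client, apiserver, and proxy-client certificates, each signed by a CA already present under .minikube. The following is only a minimal sketch of that kind of CA-signed client-cert generation in Go; the helper name signClientCert, the subject fields, and the file modes are illustrative, not minikube's actual implementation.)

package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

// signClientCert (hypothetical) creates a fresh RSA key and a client
// certificate signed by the supplied CA, then writes both as PEM files,
// roughly what the "generating signed profile cert" lines describe.
func signClientCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, certPath, keyPath string) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return err
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	if err := os.WriteFile(certPath, certPEM, 0o644); err != nil {
		return err
	}
	return os.WriteFile(keyPath, keyPEM, 0o600)
}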
	I0920 18:13:15.788714  749135 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:13:15.788762  749135 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:13:15.788796  749135 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:13:15.788835  749135 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem (1679 bytes)
	I0920 18:13:15.789513  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:13:15.814280  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:13:15.838979  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:13:15.861251  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:13:15.883772  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 18:13:15.906899  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:13:15.930055  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:13:15.952960  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:13:15.976078  749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:13:15.998990  749135 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
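(Editor's note: the ssh_runner.go:362 "scp ... -->" lines above copy each generated cert and the kubeconfig into the guest at /var/lib/minikube. minikube streams the bytes over an already-open SSH session; as a hedged stand-in, shelling out to scp with the machine key achieves the same net effect. The key path and copyToVM name are placeholders; the docker user and 192.168.39.237 address come from the log.)

package sketch

import "os/exec"

// copyToVM (hypothetical) pushes one local file to the guest, the way the
// scp lines above move each cert into /var/lib/minikube/certs.
func copyToVM(keyPath, localPath, remotePath string) error {
	return exec.Command("scp", "-i", keyPath,
		"-o", "StrictHostKeyChecking=no",
		localPath, "docker@192.168.39.237:"+remotePath).Run()
}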
	I0920 18:13:16.015378  749135 ssh_runner.go:195] Run: openssl version
	I0920 18:13:16.021288  749135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:13:16.031743  749135 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:13:16.036218  749135 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:13:16.036292  749135 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:13:16.041983  749135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
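(Editor's note: the three commands above install minikubeCA.pem into the guest's OpenSSL trust store: compute the certificate's subject hash, then symlink /etc/ssl/certs/<hash>.0 at the PEM. A minimal local sketch of the same two steps in Go, assuming openssl is on PATH and the installCA name is made up:)

package sketch

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA mirrors the pair of shell steps in the log: ask openssl for the
// subject hash of the CA, then point /etc/ssl/certs/<hash>.0 at the PEM so
// OpenSSL-based clients trust it.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" as seen in the log
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // replace any existing link, matching `ln -fs`
	return os.Symlink(pemPath, link)
}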
	I0920 18:13:16.052410  749135 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:13:16.056509  749135 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 18:13:16.056561  749135 kubeadm.go:392] StartCluster: {Name:addons-446299 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-446299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:13:16.056643  749135 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:13:16.056724  749135 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:13:16.093233  749135 cri.go:89] found id: ""
	I0920 18:13:16.093305  749135 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:13:16.103183  749135 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:13:16.112220  749135 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:13:16.121055  749135 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:13:16.121076  749135 kubeadm.go:157] found existing configuration files:
	
	I0920 18:13:16.121125  749135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:13:16.129727  749135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:13:16.129793  749135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:13:16.138769  749135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:13:16.147343  749135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:13:16.147401  749135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:13:16.156084  749135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:13:16.164356  749135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:13:16.164409  749135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:13:16.172957  749135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:13:16.181269  749135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:13:16.181319  749135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
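(Editor's note: the kubeadm.go:155-163 block above is the stale-config check that runs before `kubeadm init`: each existing kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if the endpoint is absent or the file is missing. A hedged Go rendering of that loop, with the cleanStaleConfigs name invented for the sketch:)

package sketch

import (
	"os"
	"os/exec"
)

// cleanStaleConfigs removes any kubeconfig under /etc/kubernetes that does
// not reference the expected control-plane endpoint, mirroring the
// grep + `rm -f` pairs in the log.
func cleanStaleConfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the pattern (or the file) is missing,
		// which is the "may not be in ... - will remove" case in the log.
		if err := exec.Command("grep", endpoint, f).Run(); err != nil {
			_ = os.Remove(f) // ignore errors, like `rm -f`
		}
	}
}

// Usage in this run would be:
//   cleanStaleConfigs("https://control-plane.minikube.internal:8443")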
	I0920 18:13:16.189971  749135 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:13:16.241816  749135 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 18:13:16.242023  749135 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:13:16.343705  749135 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:13:16.343865  749135 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:13:16.344016  749135 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 18:13:16.353422  749135 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:13:16.356505  749135 out.go:235]   - Generating certificates and keys ...
	I0920 18:13:16.356621  749135 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:13:16.356707  749135 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:13:16.567905  749135 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 18:13:16.678138  749135 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 18:13:16.903150  749135 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 18:13:17.220781  749135 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 18:13:17.330970  749135 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 18:13:17.331262  749135 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-446299 localhost] and IPs [192.168.39.237 127.0.0.1 ::1]
	I0920 18:13:17.404562  749135 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 18:13:17.404723  749135 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-446299 localhost] and IPs [192.168.39.237 127.0.0.1 ::1]
	I0920 18:13:17.558748  749135 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 18:13:17.723982  749135 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 18:13:17.850510  749135 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 18:13:17.850712  749135 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:13:17.910185  749135 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:13:18.072173  749135 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 18:13:18.135494  749135 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:13:18.547143  749135 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:13:18.760484  749135 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:13:18.761203  749135 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:13:18.765007  749135 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:13:18.801126  749135 out.go:235]   - Booting up control plane ...
	I0920 18:13:18.801251  749135 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:13:18.801344  749135 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:13:18.801424  749135 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:13:18.801571  749135 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:13:18.801721  749135 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:13:18.801785  749135 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:13:18.927609  749135 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 18:13:18.927774  749135 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 18:13:19.928576  749135 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001817815s
	I0920 18:13:19.928734  749135 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 18:13:24.427415  749135 kubeadm.go:310] [api-check] The API server is healthy after 4.501490258s
	I0920 18:13:24.439460  749135 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 18:13:24.456660  749135 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 18:13:24.489726  749135 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 18:13:24.489974  749135 kubeadm.go:310] [mark-control-plane] Marking the node addons-446299 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 18:13:24.502419  749135 kubeadm.go:310] [bootstrap-token] Using token: 2qbco4.c4cth5cwyyzw51bf
	I0920 18:13:24.503870  749135 out.go:235]   - Configuring RBAC rules ...
	I0920 18:13:24.504029  749135 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 18:13:24.514334  749135 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 18:13:24.520831  749135 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 18:13:24.524418  749135 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 18:13:24.527658  749135 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 18:13:24.533751  749135 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 18:13:24.833210  749135 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 18:13:25.263206  749135 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 18:13:25.833304  749135 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 18:13:25.834184  749135 kubeadm.go:310] 
	I0920 18:13:25.834298  749135 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 18:13:25.834327  749135 kubeadm.go:310] 
	I0920 18:13:25.834438  749135 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 18:13:25.834450  749135 kubeadm.go:310] 
	I0920 18:13:25.834490  749135 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 18:13:25.834595  749135 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 18:13:25.834657  749135 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 18:13:25.834674  749135 kubeadm.go:310] 
	I0920 18:13:25.834745  749135 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 18:13:25.834754  749135 kubeadm.go:310] 
	I0920 18:13:25.834980  749135 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 18:13:25.834997  749135 kubeadm.go:310] 
	I0920 18:13:25.835059  749135 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 18:13:25.835163  749135 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 18:13:25.835253  749135 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 18:13:25.835263  749135 kubeadm.go:310] 
	I0920 18:13:25.835376  749135 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 18:13:25.835483  749135 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 18:13:25.835490  749135 kubeadm.go:310] 
	I0920 18:13:25.835595  749135 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2qbco4.c4cth5cwyyzw51bf \
	I0920 18:13:25.835757  749135 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:947ef21afc8104efa9fe7e5dbe397ab7540e2665a521761d784eb9c9d11b061d \
	I0920 18:13:25.835806  749135 kubeadm.go:310] 	--control-plane 
	I0920 18:13:25.835816  749135 kubeadm.go:310] 
	I0920 18:13:25.835914  749135 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 18:13:25.835926  749135 kubeadm.go:310] 
	I0920 18:13:25.836021  749135 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2qbco4.c4cth5cwyyzw51bf \
	I0920 18:13:25.836149  749135 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:947ef21afc8104efa9fe7e5dbe397ab7540e2665a521761d784eb9c9d11b061d 
	I0920 18:13:25.837593  749135 kubeadm.go:310] W0920 18:13:16.222475     810 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:13:25.837868  749135 kubeadm.go:310] W0920 18:13:16.223486     810 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:13:25.837990  749135 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:13:25.838019  749135 cni.go:84] Creating CNI manager for ""
	I0920 18:13:25.838028  749135 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:13:25.839751  749135 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 18:13:25.840949  749135 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 18:13:25.852783  749135 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
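(Editor's note: with the kvm2 driver and the crio runtime, minikube falls back to a plain bridge CNI and writes a 496-byte conflist to /etc/cni/net.d, as the two lines above show. The exact bytes are not in the log; the config below is only a generic bridge + host-local example of the kind of file that lands there, with the subnet chosen for illustration.)

package sketch

import "os"

// bridgeConflist is an illustrative bridge CNI config; the real contents
// minikube copies are not reproduced in this log.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

// writeBridgeConflist drops the config where the container runtime's CNI
// plugin looks for it, mirroring the `mkdir -p` + scp pair in the log.
func writeBridgeConflist() error {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		return err
	}
	return os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644)
}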
	I0920 18:13:25.871921  749135 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:13:25.871998  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:25.872010  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-446299 minikube.k8s.io/updated_at=2024_09_20T18_13_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a minikube.k8s.io/name=addons-446299 minikube.k8s.io/primary=true
	I0920 18:13:25.893378  749135 ops.go:34] apiserver oom_adj: -16
	I0920 18:13:26.025723  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:26.526635  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:27.026038  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:27.526100  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:28.026195  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:28.526494  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:29.026560  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:29.526369  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:30.026015  749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:13:30.116670  749135 kubeadm.go:1113] duration metric: took 4.244739753s to wait for elevateKubeSystemPrivileges
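(Editor's note: the repeated `kubectl get sa default` lines above, roughly one every half second from 18:13:26.0 to 18:13:30.1, are the wait loop whose duration is reported as the 4.24s elevateKubeSystemPrivileges metric. A minimal sketch of that polling pattern; waitForDefaultSA and its parameters are placeholders, not minikube's API:)

package sketch

import (
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until the command succeeds
// or the deadline passes, one attempt roughly every 500ms as in the log.
func waitForDefaultSA(kubectlPath, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		err := exec.Command(kubectlPath, "get", "sa", "default",
			"--kubeconfig="+kubeconfig).Run()
		if err == nil || time.Now().After(deadline) {
			return err
		}
		time.Sleep(500 * time.Millisecond)
	}
}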
	I0920 18:13:30.116706  749135 kubeadm.go:394] duration metric: took 14.06015239s to StartCluster
	I0920 18:13:30.116726  749135 settings.go:142] acquiring lock: {Name:mk0bd1e421bf437575c076c52c1ff2f74497a1ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:30.116861  749135 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:13:30.117227  749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/kubeconfig: {Name:mk275c54cf52b0ccdc22fcaa39c7b9c31092c648 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:13:30.117422  749135 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 18:13:30.117448  749135 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:13:30.117512  749135 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 18:13:30.117640  749135 addons.go:69] Setting yakd=true in profile "addons-446299"
	I0920 18:13:30.117667  749135 addons.go:234] Setting addon yakd=true in "addons-446299"
	I0920 18:13:30.117700  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.117727  749135 config.go:182] Loaded profile config "addons-446299": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:13:30.117688  749135 addons.go:69] Setting default-storageclass=true in profile "addons-446299"
	I0920 18:13:30.117804  749135 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-446299"
	I0920 18:13:30.117694  749135 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-446299"
	I0920 18:13:30.117828  749135 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-446299"
	I0920 18:13:30.117867  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.117708  749135 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-446299"
	I0920 18:13:30.117998  749135 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-446299"
	I0920 18:13:30.117714  749135 addons.go:69] Setting inspektor-gadget=true in profile "addons-446299"
	I0920 18:13:30.118028  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.118044  749135 addons.go:234] Setting addon inspektor-gadget=true in "addons-446299"
	I0920 18:13:30.118082  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.117716  749135 addons.go:69] Setting gcp-auth=true in profile "addons-446299"
	I0920 18:13:30.118200  749135 mustload.go:65] Loading cluster: addons-446299
	I0920 18:13:30.118199  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.118219  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.117703  749135 addons.go:69] Setting ingress-dns=true in profile "addons-446299"
	I0920 18:13:30.118237  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.118242  749135 addons.go:234] Setting addon ingress-dns=true in "addons-446299"
	I0920 18:13:30.118250  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.118270  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.118376  749135 config.go:182] Loaded profile config "addons-446299": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:13:30.118380  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.118401  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.118492  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.118530  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.118647  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.118678  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.117720  749135 addons.go:69] Setting metrics-server=true in profile "addons-446299"
	I0920 18:13:30.118748  749135 addons.go:234] Setting addon metrics-server=true in "addons-446299"
	I0920 18:13:30.118777  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.118823  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.118831  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.118883  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.118889  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.117726  749135 addons.go:69] Setting ingress=true in profile "addons-446299"
	I0920 18:13:30.119096  749135 addons.go:234] Setting addon ingress=true in "addons-446299"
	I0920 18:13:30.119137  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.117736  749135 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-446299"
	I0920 18:13:30.119353  749135 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-446299"
	I0920 18:13:30.119501  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.119521  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.119740  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.117735  749135 addons.go:69] Setting registry=true in profile "addons-446299"
	I0920 18:13:30.119761  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.119766  749135 addons.go:234] Setting addon registry=true in "addons-446299"
	I0920 18:13:30.119795  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.120169  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.120211  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.117735  749135 addons.go:69] Setting cloud-spanner=true in profile "addons-446299"
	I0920 18:13:30.120247  749135 addons.go:234] Setting addon cloud-spanner=true in "addons-446299"
	I0920 18:13:30.117743  749135 addons.go:69] Setting volcano=true in profile "addons-446299"
	I0920 18:13:30.120264  749135 addons.go:234] Setting addon volcano=true in "addons-446299"
	I0920 18:13:30.120292  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.120352  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.117744  749135 addons.go:69] Setting storage-provisioner=true in profile "addons-446299"
	I0920 18:13:30.120495  749135 addons.go:234] Setting addon storage-provisioner=true in "addons-446299"
	I0920 18:13:30.120536  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.120768  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.120790  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.117753  749135 addons.go:69] Setting volumesnapshots=true in profile "addons-446299"
	I0920 18:13:30.120925  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.120933  749135 addons.go:234] Setting addon volumesnapshots=true in "addons-446299"
	I0920 18:13:30.120955  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.120966  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.122929  749135 out.go:177] * Verifying Kubernetes components...
	I0920 18:13:30.124310  749135 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:13:30.139606  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35413
	I0920 18:13:30.139626  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43313
	I0920 18:13:30.139664  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38439
	I0920 18:13:30.139664  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35171
	I0920 18:13:30.151212  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37399
	I0920 18:13:30.151245  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.151251  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34369
	I0920 18:13:30.151274  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.151393  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.151405  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.151438  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.151856  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.151891  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.152064  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.152188  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.152245  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.152411  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.152423  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.152487  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.152534  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.152664  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.152678  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.152736  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.152850  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.152861  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.152984  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.152995  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.153048  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.153483  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.153515  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.154013  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46593
	I0920 18:13:30.154291  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.154314  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.154382  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.154805  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.154867  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.155632  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.155794  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.155815  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.155882  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.156284  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.156326  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.159168  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.159296  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.159618  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.159652  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.159773  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.159808  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.160117  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.160143  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.160217  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.160647  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.161813  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.161856  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.164600  749135 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-446299"
	I0920 18:13:30.164649  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.165039  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.165072  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.176807  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33581
	I0920 18:13:30.177469  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.178091  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.178111  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.178583  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.179242  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.179271  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.185984  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43023
	I0920 18:13:30.186586  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.187123  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.187144  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.187554  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.188160  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.188203  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.193206  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39461
	I0920 18:13:30.193417  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33557
	I0920 18:13:30.193849  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.194099  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.194452  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.194471  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.194968  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.195118  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.195132  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.195349  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38343
	I0920 18:13:30.195438  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.196077  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.196556  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.196580  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.197033  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.197694  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.197734  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.197960  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36057
	I0920 18:13:30.198500  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.198621  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.198726  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37865
	I0920 18:13:30.198876  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.199030  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.199369  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.199385  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.199416  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.199438  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.199710  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.200318  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.200362  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.200438  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.201288  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.201893  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.201916  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.203229  749135 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 18:13:30.204746  749135 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 18:13:30.204766  749135 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 18:13:30.204788  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.206295  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.206675  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.207700  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43965
	I0920 18:13:30.208147  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.208668  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.208691  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.209400  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.209672  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.209714  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.210328  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.210357  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.210920  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.210948  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.211140  749135 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 18:13:30.211638  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.212145  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.212323  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.212494  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.212630  749135 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 18:13:30.212646  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 18:13:30.212664  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.213593  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39695
	I0920 18:13:30.214660  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34213
	I0920 18:13:30.215405  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.215903  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.215924  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.216384  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.216437  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.216507  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.216537  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.216592  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37735
	I0920 18:13:30.217041  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.217047  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.217305  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.217448  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.217585  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.218334  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.218356  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.218795  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.219018  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.219181  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38177
	I0920 18:13:30.219880  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.219925  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.219979  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.220067  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.220460  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.220482  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.220702  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.220722  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.220787  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38889
	I0920 18:13:30.221095  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.221183  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.221329  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.221386  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:30.221397  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:30.223334  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.223352  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.223398  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:30.223412  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:30.223419  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:30.223427  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:30.223433  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:30.223529  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.224012  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:30.224041  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:30.224048  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	W0920 18:13:30.224154  749135 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0920 18:13:30.224543  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41669
	I0920 18:13:30.225486  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.225509  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.226183  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.226202  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.226560  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 18:13:30.226986  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.227285  749135 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 18:13:30.227644  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.227684  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.228253  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34967
	I0920 18:13:30.228649  749135 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 18:13:30.228675  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 18:13:30.228697  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.229313  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42909
	I0920 18:13:30.229673  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.230049  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 18:13:30.230142  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.230158  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.230485  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.230672  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.231280  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.231806  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46577
	I0920 18:13:30.231963  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.231988  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.232145  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.232332  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.232428  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.232440  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 18:13:30.232482  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.232696  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.233542  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.233796  749135 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:13:30.234419  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.234438  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.234783  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 18:13:30.235010  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.235348  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.236127  749135 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 18:13:30.236900  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38295
	I0920 18:13:30.237440  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 18:13:30.237599  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39621
	I0920 18:13:30.238719  749135 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:13:30.239949  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 18:13:30.240129  749135 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 18:13:30.240146  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 18:13:30.240162  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.242347  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 18:13:30.243261  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.243644  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.243673  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.243908  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.244083  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.244194  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.244349  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.244407  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44717
	I0920 18:13:30.244610  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 18:13:30.245914  749135 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 18:13:30.245941  749135 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 18:13:30.245963  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.246673  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41943
	I0920 18:13:30.247429  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.247556  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.247990  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.248061  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.248074  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.248079  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.248343  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.248449  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.248449  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.248468  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.248596  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.248607  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.248648  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.248833  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.249170  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.249280  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.249352  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.249393  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.249409  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.250084  749135 addons.go:234] Setting addon default-storageclass=true in "addons-446299"
	I0920 18:13:30.250124  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:30.250508  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.250532  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.251170  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.251192  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.251274  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.251488  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.251857  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.251862  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.251910  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.251940  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.252078  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.252212  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.252224  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.252440  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.252553  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.252748  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.252820  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.252833  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.253735  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.253941  749135 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 18:13:30.254017  749135 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 18:13:30.253980  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.254455  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.254656  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.254870  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.254873  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.255177  749135 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:13:30.255187  749135 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 18:13:30.255205  749135 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 18:13:30.255226  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.255274  749135 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 18:13:30.255278  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.255288  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 18:13:30.255303  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.256466  749135 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 18:13:30.256532  749135 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:13:30.256552  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:13:30.256570  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.258154  749135 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 18:13:30.259159  749135 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 18:13:30.259174  749135 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 18:13:30.259188  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.259235  749135 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 18:13:30.260368  749135 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 18:13:30.260382  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 18:13:30.260394  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.260519  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.260844  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.260873  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.261038  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.261196  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.262948  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.263013  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.263033  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.263050  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.263161  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.263545  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.263701  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.264179  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.264417  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.264628  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.265340  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.265500  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.265732  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.265751  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.266060  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.266249  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.266266  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.266441  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.266593  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.266625  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.266670  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.266742  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.267063  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.267118  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.267232  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.267247  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.267357  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.267382  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.267549  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.267839  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.269511  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36981
	I0920 18:13:30.269878  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.270901  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.270926  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.271296  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.271468  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.273221  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.274917  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43681
	I0920 18:13:30.275136  749135 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 18:13:30.275446  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.276076  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.276096  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.276414  749135 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 18:13:30.276440  749135 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 18:13:30.276461  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.276501  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.276736  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.278674  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.280057  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.280316  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.280342  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.280375  749135 out.go:177]   - Using image docker.io/busybox:stable
	I0920 18:13:30.280530  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.280706  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.280828  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.280961  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	W0920 18:13:30.281845  749135 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:35600->192.168.39.237:22: read: connection reset by peer
	I0920 18:13:30.281937  749135 retry.go:31] will retry after 148.234221ms: ssh: handshake failed: read tcp 192.168.39.1:35600->192.168.39.237:22: read: connection reset by peer
	I0920 18:13:30.282766  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37633
	I0920 18:13:30.282794  749135 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 18:13:30.283193  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.283743  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.283764  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.284120  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.284286  749135 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 18:13:30.284302  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 18:13:30.284319  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.284696  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:30.284848  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:30.290962  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.290998  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.291015  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.291035  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.291443  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.291607  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.291761  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.301013  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39971
	I0920 18:13:30.301540  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:30.302060  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:30.302090  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:30.302449  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:30.302621  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:30.303997  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:30.304220  749135 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:13:30.304236  749135 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:13:30.304256  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:30.307237  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.307715  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:30.307749  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:30.307899  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:30.308079  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:30.308237  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:30.308392  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:30.604495  749135 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:13:30.604525  749135 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 18:13:30.661112  749135 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 18:13:30.661146  749135 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 18:13:30.662437  749135 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 18:13:30.662469  749135 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 18:13:30.705589  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 18:13:30.750149  749135 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 18:13:30.750187  749135 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 18:13:30.753172  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 18:13:30.755196  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 18:13:30.771513  749135 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 18:13:30.771540  749135 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 18:13:30.797810  749135 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 18:13:30.797835  749135 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 18:13:30.807101  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 18:13:30.868448  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:13:30.869944  749135 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 18:13:30.869963  749135 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 18:13:30.871146  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:13:30.896462  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 18:13:30.900930  749135 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 18:13:30.900959  749135 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 18:13:30.906831  749135 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 18:13:30.906880  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 18:13:30.933744  749135 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 18:13:30.933774  749135 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 18:13:30.969038  749135 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 18:13:30.969076  749135 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 18:13:31.000321  749135 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 18:13:31.000354  749135 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 18:13:31.182228  749135 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 18:13:31.182256  749135 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 18:13:31.198470  749135 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 18:13:31.198506  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 18:13:31.232002  749135 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 18:13:31.232027  749135 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 18:13:31.241138  749135 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 18:13:31.241162  749135 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 18:13:31.303359  749135 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 18:13:31.303389  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 18:13:31.308659  749135 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 18:13:31.308686  749135 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 18:13:31.411918  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 18:13:31.444332  749135 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 18:13:31.444368  749135 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 18:13:31.517643  749135 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:13:31.517669  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 18:13:31.522528  749135 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 18:13:31.522555  749135 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 18:13:31.527932  749135 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:13:31.527961  749135 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 18:13:31.598680  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 18:13:31.753266  749135 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 18:13:31.753305  749135 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 18:13:31.825090  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:13:31.868789  749135 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 18:13:31.868821  749135 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 18:13:31.871872  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:13:32.035165  749135 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 18:13:32.035205  749135 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 18:13:32.325034  749135 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 18:13:32.325068  749135 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 18:13:32.426301  749135 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 18:13:32.426330  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 18:13:32.734227  749135 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 18:13:32.734252  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 18:13:32.776162  749135 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 18:13:32.776201  749135 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 18:13:32.973816  749135 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.369238207s)
	I0920 18:13:32.973844  749135 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.369303036s)
	I0920 18:13:32.973868  749135 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0920 18:13:32.974717  749135 node_ready.go:35] waiting up to 6m0s for node "addons-446299" to be "Ready" ...
	I0920 18:13:32.978640  749135 node_ready.go:49] node "addons-446299" has status "Ready":"True"
	I0920 18:13:32.978660  749135 node_ready.go:38] duration metric: took 3.921107ms for node "addons-446299" to be "Ready" ...
	I0920 18:13:32.978672  749135 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:13:32.990987  749135 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8b5fx" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:33.092955  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 18:13:33.125330  749135 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 18:13:33.125357  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 18:13:33.271505  749135 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 18:13:33.271534  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 18:13:33.497723  749135 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-446299" context rescaled to 1 replicas
	I0920 18:13:33.600812  749135 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 18:13:33.600847  749135 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 18:13:33.656016  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.902807697s)
	I0920 18:13:33.656075  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:33.656075  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.900839477s)
	I0920 18:13:33.656016  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.950386811s)
	I0920 18:13:33.656109  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:33.656121  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:33.656127  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:33.656090  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:33.656146  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:33.656567  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:33.656587  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:33.656608  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:33.656624  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:33.656627  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:33.656653  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:33.656665  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:33.656676  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:33.656635  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:33.656718  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:33.656637  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:33.656744  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:33.656760  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:33.656767  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:33.656730  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:33.657076  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:33.657118  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:33.657119  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:33.657096  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:33.657156  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:33.657263  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:33.657279  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:33.758218  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 18:13:35.015799  749135 pod_ready.go:103] pod "coredns-7c65d6cfc9-8b5fx" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:35.494820  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.687683083s)
	I0920 18:13:35.494889  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.494891  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.626405857s)
	I0920 18:13:35.494920  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.494932  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.494930  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.623755287s)
	I0920 18:13:35.494950  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.494983  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.495052  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.495370  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.495388  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.495396  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.495404  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.496899  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:35.496907  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:35.496907  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:35.496946  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.496958  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.496966  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.496977  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.496990  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.496999  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.497065  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.497077  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.497089  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.497098  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.497258  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.497276  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.498278  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:35.498290  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.498301  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.545445  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.545475  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.545718  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.545745  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.545752  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	W0920 18:13:35.545859  749135 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
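The warning above is the standard optimistic-concurrency failure ("the object has been modified; please apply your changes to the latest version and try again") that occurs when two writers race to update the same StorageClass. A common client-go pattern for this, shown below as a hedged sketch (not minikube's actual code; the clientset wiring and the "local-path" name are illustrative), is to re-read the object and reapply the change with RetryOnConflict:

	// Sketch only: mark a StorageClass as default, retrying on update conflicts.
	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// markDefault sets the is-default-class annotation on a StorageClass,
	// re-fetching and retrying whenever the API server reports a conflict.
	func markDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
			_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err
		})
	}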
	I0920 18:13:35.559802  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:35.559831  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:35.560074  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:35.560092  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:35.560108  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:36.023603  749135 pod_ready.go:93] pod "coredns-7c65d6cfc9-8b5fx" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:36.023630  749135 pod_ready.go:82] duration metric: took 3.032619357s for pod "coredns-7c65d6cfc9-8b5fx" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.023643  749135 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tfngl" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.059659  749135 pod_ready.go:93] pod "coredns-7c65d6cfc9-tfngl" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:36.059693  749135 pod_ready.go:82] duration metric: took 36.040161ms for pod "coredns-7c65d6cfc9-tfngl" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.059705  749135 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.075393  749135 pod_ready.go:93] pod "etcd-addons-446299" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:36.075428  749135 pod_ready.go:82] duration metric: took 15.714418ms for pod "etcd-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.075441  749135 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.089509  749135 pod_ready.go:93] pod "kube-apiserver-addons-446299" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:36.089536  749135 pod_ready.go:82] duration metric: took 14.086774ms for pod "kube-apiserver-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.089546  749135 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.600534  749135 pod_ready.go:93] pod "kube-controller-manager-addons-446299" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:36.600565  749135 pod_ready.go:82] duration metric: took 511.011851ms for pod "kube-controller-manager-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.600579  749135 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9pcgb" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.797080  749135 pod_ready.go:93] pod "kube-proxy-9pcgb" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:36.797111  749135 pod_ready.go:82] duration metric: took 196.523175ms for pod "kube-proxy-9pcgb" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:36.797123  749135 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:37.195153  749135 pod_ready.go:93] pod "kube-scheduler-addons-446299" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:37.195185  749135 pod_ready.go:82] duration metric: took 398.053895ms for pod "kube-scheduler-addons-446299" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:37.195198  749135 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace to be "Ready" ...
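The pod_ready.go lines above poll each system-critical pod until its Ready condition is True (or the 6m0s budget expires). A minimal client-go sketch of that kind of wait follows; the namespace and pod name are placeholders, and this is an assumption about the general pattern rather than minikube's internal helper:

	// Sketch only: poll a pod until its Ready condition is True.
	package main

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient error: keep polling
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}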
	I0920 18:13:37.260708  749135 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 18:13:37.260749  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:37.264035  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:37.264543  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:37.264579  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:37.264739  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:37.264958  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:37.265141  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:37.265285  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:37.472764  749135 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 18:13:37.656998  749135 addons.go:234] Setting addon gcp-auth=true in "addons-446299"
	I0920 18:13:37.657072  749135 host.go:66] Checking if "addons-446299" exists ...
	I0920 18:13:37.657494  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:37.657545  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:37.673709  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40331
	I0920 18:13:37.674398  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:37.674958  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:37.674981  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:37.675363  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:37.675843  749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:13:37.675888  749135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:13:37.691444  749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38543
	I0920 18:13:37.692042  749135 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:13:37.692560  749135 main.go:141] libmachine: Using API Version  1
	I0920 18:13:37.692593  749135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:13:37.693006  749135 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:13:37.693249  749135 main.go:141] libmachine: (addons-446299) Calling .GetState
	I0920 18:13:37.695166  749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
	I0920 18:13:37.695451  749135 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 18:13:37.695481  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
	I0920 18:13:37.698450  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:37.698921  749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
	I0920 18:13:37.698953  749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
	I0920 18:13:37.699128  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
	I0920 18:13:37.699312  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
	I0920 18:13:37.699441  749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
	I0920 18:13:37.699604  749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
	I0920 18:13:38.819493  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.922986564s)
	I0920 18:13:38.819541  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.407583803s)
	I0920 18:13:38.819575  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.819591  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.819607  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.819648  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.220925429s)
	I0920 18:13:38.819598  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.819686  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.819705  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.819778  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.994650356s)
	W0920 18:13:38.819815  749135 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 18:13:38.819840  749135 retry.go:31] will retry after 365.705658ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
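The retried failure above is the usual CRD-before-CR ordering race: the VolumeSnapshotClass manifest is applied in the same kubectl batch as the CRDs that define it, so there is "no matches for kind VolumeSnapshotClass" until those CRDs are registered, exactly as the stderr hint "ensure CRDs are installed first" says. minikube handles this by retrying (the next attempt below reapplies with --force and completes). A minimal manual equivalent, assuming direct kubectl access to the same cluster rather than minikube's retry loop, would be to wait for the CRDs to reach the Established condition before applying the class:

	kubectl --context addons-446299 wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	  crd/volumesnapshots.snapshot.storage.k8s.io
	kubectl --context addons-446299 apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml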
	I0920 18:13:38.819845  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.947942371s)
	I0920 18:13:38.819873  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.819885  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.819961  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.726965652s)
	I0920 18:13:38.820001  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.820012  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.820227  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.820244  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.820285  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.820295  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.820413  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:38.820433  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:38.820460  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.820467  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.820475  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.820481  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.820629  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.820639  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.820647  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.820655  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.820718  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:38.820773  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.820781  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.820789  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.820795  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.821299  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:38.821316  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:38.821349  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.821355  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.821365  749135 addons.go:475] Verifying addon registry=true in "addons-446299"
	I0920 18:13:38.821906  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.821917  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.821926  749135 addons.go:475] Verifying addon ingress=true in "addons-446299"
	I0920 18:13:38.821997  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.822026  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.822038  749135 addons.go:475] Verifying addon metrics-server=true in "addons-446299"
	I0920 18:13:38.822070  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.822084  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.822092  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:38.822100  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:38.822128  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.822143  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.822495  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:38.822542  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:38.822551  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:38.823406  749135 out.go:177] * Verifying ingress addon...
	I0920 18:13:38.823868  749135 out.go:177] * Verifying registry addon...
	I0920 18:13:38.824871  749135 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-446299 service yakd-dashboard -n yakd-dashboard
	
	I0920 18:13:38.825597  749135 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 18:13:38.826680  749135 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 18:13:38.844205  749135 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 18:13:38.844236  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:38.850356  749135 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 18:13:38.850383  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:39.186375  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:13:39.200878  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:39.330411  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:39.330769  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:39.849376  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:39.851690  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:40.361850  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:40.362230  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:41.034778  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:41.035000  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:41.038162  749135 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.342687523s)
	I0920 18:13:41.038403  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.280132041s)
	I0920 18:13:41.038461  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:41.038481  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:41.038819  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:41.038884  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:41.038905  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:41.038922  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:41.039163  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:41.039205  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:41.039225  749135 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-446299"
	I0920 18:13:41.039205  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:41.041287  749135 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 18:13:41.041290  749135 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:13:41.043438  749135 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 18:13:41.044297  749135 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 18:13:41.044713  749135 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 18:13:41.044732  749135 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 18:13:41.101841  749135 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 18:13:41.101863  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:41.130328  749135 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 18:13:41.130361  749135 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 18:13:41.246926  749135 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 18:13:41.246950  749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 18:13:41.330722  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:41.331217  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:41.367190  749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 18:13:41.375612  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.189187999s)
	I0920 18:13:41.375679  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:41.375703  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:41.376082  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:41.376123  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:41.376131  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:41.376140  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:41.376180  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:41.376437  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:41.376461  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:41.376464  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:41.548363  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:41.701651  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:41.831758  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:41.831933  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:42.053967  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:42.331450  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:42.331860  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:42.559368  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:42.796101  749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.428861154s)
	I0920 18:13:42.796164  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:42.796186  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:42.796539  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:42.796652  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:42.796628  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:42.796665  749135 main.go:141] libmachine: Making call to close driver server
	I0920 18:13:42.796674  749135 main.go:141] libmachine: (addons-446299) Calling .Close
	I0920 18:13:42.796931  749135 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:13:42.796948  749135 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:13:42.796971  749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
	I0920 18:13:42.798018  749135 addons.go:475] Verifying addon gcp-auth=true in "addons-446299"
	I0920 18:13:42.799750  749135 out.go:177] * Verifying gcp-auth addon...
	I0920 18:13:42.801961  749135 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 18:13:42.813536  749135 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 18:13:42.813557  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:42.834100  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:42.834512  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:43.050004  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:43.305311  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:43.330407  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:43.331586  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:43.549945  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:43.702111  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:43.806287  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:43.830332  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:43.830560  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:44.050313  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:44.307181  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:44.332062  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:44.332579  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:44.549621  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:44.806074  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:44.830087  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:44.830821  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:45.049798  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:45.305355  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:45.329798  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:45.330472  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:45.549159  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:45.702368  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:45.805600  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:45.830331  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:45.831003  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:46.048681  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:46.476235  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:46.476881  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:46.477765  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:46.576766  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:46.805777  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:46.830583  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:46.831463  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:47.050496  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:47.307091  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:47.330512  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:47.331048  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:47.549305  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:47.805735  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:47.830215  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:47.831512  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:48.049902  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:48.202178  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:48.306243  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:48.329718  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:48.332280  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:48.550170  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:48.805429  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:48.829830  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:48.831490  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:49.050407  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:49.305950  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:49.331188  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:49.331284  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:49.549193  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:49.805377  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:49.831064  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:49.831335  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:50.050205  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:50.205469  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:50.306610  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:50.330226  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:50.331728  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:50.548853  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:50.806045  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:50.830924  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:50.831062  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:51.049036  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:51.305994  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:51.330295  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:51.330905  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:51.549433  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:51.805870  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:51.830479  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:51.831665  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:52.050500  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:52.305644  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:52.330460  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:52.330909  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:52.549056  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:52.700600  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:52.805458  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:52.829967  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:52.831274  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:53.049224  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:53.306145  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:53.330699  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:53.331032  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:53.548388  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:54.211235  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:54.211371  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:54.211581  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:54.212019  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:54.305931  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:54.332757  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:54.333316  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:54.550241  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:54.701439  749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
	I0920 18:13:54.805276  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:54.830616  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:54.831417  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:55.057083  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:55.305836  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:55.330687  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:55.331243  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:55.550673  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:55.701690  749135 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"True"
	I0920 18:13:55.701725  749135 pod_ready.go:82] duration metric: took 18.50651845s for pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace to be "Ready" ...
	I0920 18:13:55.701734  749135 pod_ready.go:39] duration metric: took 22.723049339s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:13:55.701754  749135 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:13:55.701817  749135 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:13:55.736899  749135 api_server.go:72] duration metric: took 25.619420852s to wait for apiserver process to appear ...
	I0920 18:13:55.736929  749135 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:13:55.736952  749135 api_server.go:253] Checking apiserver healthz at https://192.168.39.237:8443/healthz ...
	I0920 18:13:55.741901  749135 api_server.go:279] https://192.168.39.237:8443/healthz returned 200:
	ok
	I0920 18:13:55.743609  749135 api_server.go:141] control plane version: v1.31.1
	I0920 18:13:55.743635  749135 api_server.go:131] duration metric: took 6.69997ms to wait for apiserver health ...
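The healthz check logged above can also be reproduced by hand when diagnosing a slow or unresponsive control plane; a rough manual equivalent (illustrative only, address and port taken from the log line above, and assuming the default RBAC binding that exposes /healthz to unauthenticated clients) is:

	curl -k https://192.168.39.237:8443/healthz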
	I0920 18:13:55.743646  749135 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:13:55.757231  749135 system_pods.go:59] 17 kube-system pods found
	I0920 18:13:55.757585  749135 system_pods.go:61] "coredns-7c65d6cfc9-8b5fx" [226fc466-f0b5-4501-8879-b8b9b8d758ac] Running
	I0920 18:13:55.757615  749135 system_pods.go:61] "csi-hostpath-attacher-0" [b131974d-0f4b-4bc6-bec3-d4c797279aa4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 18:13:55.757633  749135 system_pods.go:61] "csi-hostpath-resizer-0" [684355d7-d68e-4357-8103-d8350a38ea37] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 18:13:55.757647  749135 system_pods.go:61] "csi-hostpathplugin-fcmx5" [1576357c-2e2c-469a-b069-dcac225f49c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 18:13:55.757654  749135 system_pods.go:61] "etcd-addons-446299" [c82607ca-b677-4592-935a-a32dad76e79c] Running
	I0920 18:13:55.757662  749135 system_pods.go:61] "kube-apiserver-addons-446299" [93375989-de9f-4fea-afcc-44d35775ddd6] Running
	I0920 18:13:55.757668  749135 system_pods.go:61] "kube-controller-manager-addons-446299" [4c06855c-f18c-4df4-bd04-584c8594a744] Running
	I0920 18:13:55.757677  749135 system_pods.go:61] "kube-ingress-dns-minikube" [631849c1-f984-4e83-b07b-6b2ed4eb0697] Running
	I0920 18:13:55.757682  749135 system_pods.go:61] "kube-proxy-9pcgb" [934faade-c115-4ced-9bb6-c22a2fe014f2] Running
	I0920 18:13:55.757689  749135 system_pods.go:61] "kube-scheduler-addons-446299" [ce4ce9a3-dd64-47ed-a920-b6c5359c80a7] Running
	I0920 18:13:55.757697  749135 system_pods.go:61] "metrics-server-84c5f94fbc-dgfgh" [84513540-b090-4d24-b6e0-9ed764434018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:13:55.757705  749135 system_pods.go:61] "nvidia-device-plugin-daemonset-6l2l2" [c6db8268-e330-413b-9107-88c63f861e42] Running
	I0920 18:13:55.757714  749135 system_pods.go:61] "registry-66c9cd494c-vxc6t" [10b4cecb-c85b-45ef-8043-e88a81971d51] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 18:13:55.757725  749135 system_pods.go:61] "registry-proxy-bqdmf" [11ab987d-a80f-412a-8a15-03a5898a2e9e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 18:13:55.757738  749135 system_pods.go:61] "snapshot-controller-56fcc65765-4qwlb" [d4cd83fc-a074-4317-9b02-22010ae0ca66] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 18:13:55.757750  749135 system_pods.go:61] "snapshot-controller-56fcc65765-8rk95" [63d1f200-a587-488c-82d3-bf38586a6fd0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 18:13:55.757759  749135 system_pods.go:61] "storage-provisioner" [0e9e378d-208e-46e0-a2be-70f96e59408a] Running
	I0920 18:13:55.757770  749135 system_pods.go:74] duration metric: took 14.117036ms to wait for pod list to return data ...
	I0920 18:13:55.757782  749135 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:13:55.762579  749135 default_sa.go:45] found service account: "default"
	I0920 18:13:55.762610  749135 default_sa.go:55] duration metric: took 4.817698ms for default service account to be created ...
	I0920 18:13:55.762622  749135 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:13:55.772780  749135 system_pods.go:86] 17 kube-system pods found
	I0920 18:13:55.772808  749135 system_pods.go:89] "coredns-7c65d6cfc9-8b5fx" [226fc466-f0b5-4501-8879-b8b9b8d758ac] Running
	I0920 18:13:55.772816  749135 system_pods.go:89] "csi-hostpath-attacher-0" [b131974d-0f4b-4bc6-bec3-d4c797279aa4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 18:13:55.772822  749135 system_pods.go:89] "csi-hostpath-resizer-0" [684355d7-d68e-4357-8103-d8350a38ea37] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 18:13:55.772830  749135 system_pods.go:89] "csi-hostpathplugin-fcmx5" [1576357c-2e2c-469a-b069-dcac225f49c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 18:13:55.772834  749135 system_pods.go:89] "etcd-addons-446299" [c82607ca-b677-4592-935a-a32dad76e79c] Running
	I0920 18:13:55.772839  749135 system_pods.go:89] "kube-apiserver-addons-446299" [93375989-de9f-4fea-afcc-44d35775ddd6] Running
	I0920 18:13:55.772842  749135 system_pods.go:89] "kube-controller-manager-addons-446299" [4c06855c-f18c-4df4-bd04-584c8594a744] Running
	I0920 18:13:55.772847  749135 system_pods.go:89] "kube-ingress-dns-minikube" [631849c1-f984-4e83-b07b-6b2ed4eb0697] Running
	I0920 18:13:55.772851  749135 system_pods.go:89] "kube-proxy-9pcgb" [934faade-c115-4ced-9bb6-c22a2fe014f2] Running
	I0920 18:13:55.772856  749135 system_pods.go:89] "kube-scheduler-addons-446299" [ce4ce9a3-dd64-47ed-a920-b6c5359c80a7] Running
	I0920 18:13:55.772865  749135 system_pods.go:89] "metrics-server-84c5f94fbc-dgfgh" [84513540-b090-4d24-b6e0-9ed764434018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:13:55.772922  749135 system_pods.go:89] "nvidia-device-plugin-daemonset-6l2l2" [c6db8268-e330-413b-9107-88c63f861e42] Running
	I0920 18:13:55.772931  749135 system_pods.go:89] "registry-66c9cd494c-vxc6t" [10b4cecb-c85b-45ef-8043-e88a81971d51] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 18:13:55.772936  749135 system_pods.go:89] "registry-proxy-bqdmf" [11ab987d-a80f-412a-8a15-03a5898a2e9e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 18:13:55.772946  749135 system_pods.go:89] "snapshot-controller-56fcc65765-4qwlb" [d4cd83fc-a074-4317-9b02-22010ae0ca66] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 18:13:55.772953  749135 system_pods.go:89] "snapshot-controller-56fcc65765-8rk95" [63d1f200-a587-488c-82d3-bf38586a6fd0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 18:13:55.772957  749135 system_pods.go:89] "storage-provisioner" [0e9e378d-208e-46e0-a2be-70f96e59408a] Running
	I0920 18:13:55.772963  749135 system_pods.go:126] duration metric: took 10.336403ms to wait for k8s-apps to be running ...
	I0920 18:13:55.772972  749135 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:13:55.773018  749135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:13:55.793348  749135 system_svc.go:56] duration metric: took 20.361414ms WaitForService to wait for kubelet
	I0920 18:13:55.793389  749135 kubeadm.go:582] duration metric: took 25.675912921s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:13:55.793417  749135 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:13:55.802544  749135 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:13:55.802600  749135 node_conditions.go:123] node cpu capacity is 2
	I0920 18:13:55.802617  749135 node_conditions.go:105] duration metric: took 9.193115ms to run NodePressure ...
	I0920 18:13:55.802639  749135 start.go:241] waiting for startup goroutines ...
	I0920 18:13:55.807268  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:55.834016  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:55.834628  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:56.049150  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:56.305873  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:56.331424  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:56.331798  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:56.550328  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:56.806065  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:56.829659  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:56.830161  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:57.049081  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:57.306075  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:57.329355  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:57.330540  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:57.549591  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:57.805900  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:57.830374  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:57.832330  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:58.049092  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:58.306271  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:58.329770  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:58.331160  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:58.922331  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:58.923063  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:58.923163  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:58.924173  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:59.050995  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:59.306609  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:59.410277  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:13:59.410618  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:59.549349  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:13:59.806119  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:59.829906  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:13:59.830124  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:00.049161  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:00.306487  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:00.330117  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:00.331103  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:00.549561  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:00.806760  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:00.831148  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:00.831297  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:01.050001  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:01.306298  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:01.407860  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:01.408083  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:01.548728  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:01.806320  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:01.830021  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:01.830689  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:02.048991  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:02.305521  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:02.330400  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:02.331175  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:02.549048  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:02.805598  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:02.830127  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:02.830327  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:03.049629  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:03.305858  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:03.331322  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:03.331679  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:03.548558  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:03.820166  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:03.830589  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:03.832021  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:04.465452  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:04.465905  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:04.465965  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:04.466066  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:04.565162  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:04.805221  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:04.830427  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:04.830573  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:05.050021  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:05.305449  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:05.330307  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:05.331288  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:05.549216  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:05.805952  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:05.830822  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:05.830882  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:06.048888  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:06.305947  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:06.330556  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:06.330915  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:06.549018  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:06.806964  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:06.841818  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:06.843261  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:07.048576  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:07.305982  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:07.330357  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:07.330437  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:07.549676  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:07.813909  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:07.830340  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:07.830795  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:08.050020  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:08.306364  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:08.330678  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:08.332935  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:08.548619  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:08.805004  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:08.830441  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:08.831560  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:09.332291  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:09.333139  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:09.333782  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:09.335034  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:09.549087  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:09.805906  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:09.829949  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:09.830348  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:10.049303  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:10.306098  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:10.329817  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:10.330883  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:10.549227  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:10.951479  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:10.951670  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:10.951904  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:11.048505  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:11.306899  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:11.330827  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:11.331176  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:11.549848  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:11.805719  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:11.830262  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:11.830606  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:12.059649  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:12.305971  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:12.329961  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:12.330563  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:12.549966  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:12.804939  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:12.829214  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:12.830837  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:13.048395  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:13.305641  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:13.331438  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:13.331605  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:13.549421  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:13.805919  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:13.831661  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:13.831730  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:14.049399  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:14.306300  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:14.329818  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:14.330774  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:14.552222  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:14.806365  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:14.829698  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:14.831887  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:15.048953  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:15.305618  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:15.330650  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:15.330943  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:15.548777  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:15.806132  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:15.830944  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:14:15.831352  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:16.052172  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:16.306342  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:16.329653  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:16.330883  749135 kapi.go:107] duration metric: took 37.504199599s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 18:14:16.548598  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:16.805754  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:16.830184  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:17.049843  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:17.383048  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:17.383735  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:17.550278  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:17.806058  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:17.829341  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:18.051596  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:18.306388  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:18.334664  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:18.552534  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:18.806897  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:18.830308  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:19.050045  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:19.306131  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:19.329862  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:19.550696  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:19.807045  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:19.829977  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:20.048666  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:20.306256  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:20.329911  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:20.550226  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:20.806144  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:20.830855  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:21.049583  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:21.310640  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:21.412808  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:21.549653  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:21.805953  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:21.829404  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:22.049850  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:22.315829  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:22.331862  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:22.549120  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:22.806085  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:22.829986  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:23.049654  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:23.306266  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:23.330058  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:23.560251  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:23.807013  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:23.830715  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:24.049404  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:24.306201  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:24.330512  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:24.595031  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:24.806293  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:24.907159  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:25.048965  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:25.305513  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:25.331059  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:25.549920  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:25.805287  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:25.830246  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:26.048992  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:26.306656  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:26.329987  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:26.549698  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:26.808992  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:26.829741  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:27.052649  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:27.312773  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:27.331951  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:27.562526  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:27.805604  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:27.830050  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:28.067172  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:28.306333  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:28.330924  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:28.550567  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:28.807713  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:28.836265  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:29.049440  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:29.305994  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:29.329628  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:29.551265  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:29.807081  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:29.829169  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:30.051607  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:30.308200  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:30.331298  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:30.553108  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:30.822844  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:30.831353  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:31.049853  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:31.305139  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:31.329419  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:31.549350  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:31.806142  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:31.829483  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:32.053013  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:32.306129  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:32.330537  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:32.771680  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:32.806908  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:32.831303  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:33.050163  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:33.305068  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:33.330437  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:33.548440  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:33.806177  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:33.830995  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:34.049496  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:14:34.310365  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:34.329994  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:34.548907  749135 kapi.go:107] duration metric: took 53.50460724s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 18:14:34.805871  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:34.830222  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:35.306762  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:35.330726  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:35.806453  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:35.830187  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:36.305548  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:36.330510  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:36.806443  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:36.829844  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:37.306287  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:37.330018  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:37.806187  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:37.829944  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:38.306428  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:38.330700  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:38.806275  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:38.830764  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:39.305577  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:39.330471  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:39.806014  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:39.829683  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:40.306572  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:40.329962  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:40.806663  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:40.830402  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:41.305985  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:41.329856  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:41.807066  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:41.829842  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:42.305779  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:42.330575  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:42.805256  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:42.829665  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:43.305345  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:43.329924  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:43.805970  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:43.829619  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:44.305067  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:44.330110  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:44.807165  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:44.832428  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:45.307073  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:45.329430  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:45.807239  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:45.829759  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:46.305795  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:46.330660  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:46.807307  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:46.829950  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:47.306710  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:47.330054  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:47.806495  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:47.830576  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:48.305615  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:48.330601  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:48.805326  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:48.829994  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:49.306221  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:49.330067  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:49.807517  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:49.831847  749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:14:50.312486  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:50.412022  749135 kapi.go:107] duration metric: took 1m11.586419635s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 18:14:50.805525  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:51.306784  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:51.919819  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:52.306451  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:52.809242  749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:14:53.318752  749135 kapi.go:107] duration metric: took 1m10.516788064s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 18:14:53.320395  749135 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-446299 cluster.
	I0920 18:14:53.321854  749135 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 18:14:53.323252  749135 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 18:14:53.324985  749135 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, storage-provisioner, default-storageclass, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0920 18:14:53.326283  749135 addons.go:510] duration metric: took 1m23.208765269s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner storage-provisioner default-storageclass metrics-server inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
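(The gcp-auth messages above note that credential injection can be skipped per pod by labelling it. A minimal sketch of how that label could be supplied at creation time; the pod name and image here are placeholders, not taken from this run, and the label only matters if it is present when the pod is admitted, so existing pods have to be recreated or the addon re-enabled with --refresh:

	# hypothetical pod that the gcp-auth webhook will leave untouched
	kubectl --context addons-446299 run skip-demo --image=busybox --restart=Never \
	  --labels=gcp-auth-skip-secret=true -- sleep 3600
)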
	I0920 18:14:53.326342  749135 start.go:246] waiting for cluster config update ...
	I0920 18:14:53.326365  749135 start.go:255] writing updated cluster config ...
	I0920 18:14:53.326710  749135 ssh_runner.go:195] Run: rm -f paused
	I0920 18:14:53.387365  749135 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:14:53.389186  749135 out.go:177] * Done! kubectl is now configured to use "addons-446299" cluster and "default" namespace by default
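(The kapi.go:96 entries above show minikube polling each addon's pods by label selector until they are no longer Pending, with the kapi.go:107 lines recording how long each wait took. A rough command-line equivalent of one of those waits, offered only as a sketch and not what minikube itself executes; the selector and namespace are taken from the csi-hostpath-driver pods listed further below:

	# block until every csi-hostpath-driver pod reports Ready, or give up after 6 minutes
	kubectl --context addons-446299 -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver \
	  --for=condition=Ready --timeout=6m0s
)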
	
	
	==> CRI-O <==
	Sep 20 18:29:39 addons-446299 crio[659]: time="2024-09-20 18:29:39.165314637Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9abaa60d-786a-4b51-a3ac-9096c481c48b name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:29:39 addons-446299 crio[659]: time="2024-09-20 18:29:39.166894809Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=65c2d85e-8723-4202-99c3-c3bc3b816607 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 20 18:29:39 addons-446299 crio[659]: time="2024-09-20 18:29:39.167212405Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6ec3b90eaa707443e69e248c6c52982afc7f57bd09088900563b0c20b9194d40,Metadata:&PodSandboxMetadata{Name:task-pv-pod-restore,Uid:c0105316-5ff3-4ccd-8862-0a9a1965982f,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726856618202562262,Labels:map[string]string{app: task-pv-pod-restore,io.kubernetes.container.name: POD,io.kubernetes.pod.name: task-pv-pod-restore,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0105316-5ff3-4ccd-8862-0a9a1965982f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:23:37.882062860Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f6e3463aaaeb08e7928d0478802b7e14926a6d1868d836d758bdd93915f3ab3d,Metadata:&PodSandboxMetadata{Name:nginx,Uid:e00699c2-7689-43aa-9a79-f6b8682fbe91,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726856609961670258,L
abels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e00699c2-7689-43aa-9a79-f6b8682fbe91,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:23:29.633310130Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a86b7e0dc8d7346f41e32ecd9161a50423f5b245cd57df1481d28b0ac6aac3b7,Metadata:&PodSandboxMetadata{Name:busybox,Uid:785bf044-a4fc-4f3b-aa48-f0c32d84c0cb,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726856096147596972,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 785bf044-a4fc-4f3b-aa48-f0c32d84c0cb,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:14:55.835490945Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:efe0ec0dcbcc2ed97a1516bf84bf6944f46cc3c709619429a3f8a6ed7ec20db4,Metad
ata:&PodSandboxMetadata{Name:gcp-auth-89d5ffd79-9scf7,Uid:e1fe9053-9c74-44c1-b9eb-33e656a4810b,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726856088030410599,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9scf7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e1fe9053-9c74-44c1-b9eb-33e656a4810b,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: 89d5ffd79,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:13:42.680776042Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:75840320e52800f1f44b2e6c517cc9307855642595e4a7055201d0ba2d030659,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-bc57996ff-8kt58,Uid:91004bb0-5831-431e-8777-5e8e4b5296bc,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726856082839815659,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ing
ress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-8kt58,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 91004bb0-5831-431e-8777-5e8e4b5296bc,pod-template-hash: bc57996ff,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:13:38.628847599Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:00b4d98c297796e0eb1b921793bddbf0c466ffdc076d60dd27517a349c2d3749,Metadata:&PodSandboxMetadata{Name:csi-hostpath-resizer-0,Uid:684355d7-d68e-4357-8103-d8350a38ea37,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726856021684356554,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,app.kubernetes.io/name: csi-hostpath-resizer,apps.kubernetes.io/pod-index: 0,controller-revision-hash: csi-hostpath-resizer-dd9fcd54,io.kubernetes.container.name: POD,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6843
55d7-d68e-4357-8103-d8350a38ea37,kubernetes.io/minikube-addons: csi-hostpath-driver,statefulset.kubernetes.io/pod-name: csi-hostpath-resizer-0,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:13:41.050114763Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3ffd6a03ee49011ec8d222722b52204537020ec67831669422b18f2722d276e2,Metadata:&PodSandboxMetadata{Name:csi-hostpath-attacher-0,Uid:b131974d-0f4b-4bc6-bec3-d4c797279aa4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726856021210075625,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,app.kubernetes.io/name: csi-hostpath-attacher,apps.kubernetes.io/pod-index: 0,controller-revision-hash: csi-hostpath-attacher-7784d6d6ff,io.kubernetes.container.name: POD,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131974d-0f4b-4bc6-bec3-d4c797279aa4,kubernetes.io/minikube-addons: csi-hostpath-driver,statefulset.kubernetes.io/pod-name:
csi-hostpath-attacher-0,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:13:40.298962857Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&PodSandboxMetadata{Name:csi-hostpathplugin-fcmx5,Uid:1576357c-2e2c-469a-b069-dcac225f49c4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726856021099964976,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,app.kubernetes.io/component: plugin,app.kubernetes.io/instance: hostpath.csi.k8s.io,app.kubernetes.io/name: csi-hostpathplugin,app.kubernetes.io/part-of: csi-driver-host-path,controller-revision-hash: 69fcd644d8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,kubernetes.io/minikube-addons: csi-hostpath-driver,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/confi
g.seen: 2024-09-20T18:13:40.455084561Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:46ab05da30745fa494969aa465b9ae41146fb457dd17388f6f0fbfa7637de4b7,Metadata:&PodSandboxMetadata{Name:snapshot-controller-56fcc65765-4qwlb,Uid:d4cd83fc-a074-4317-9b02-22010ae0ca66,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726856020457224097,Labels:map[string]string{app: snapshot-controller,io.kubernetes.container.name: POD,io.kubernetes.pod.name: snapshot-controller-56fcc65765-4qwlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4cd83fc-a074-4317-9b02-22010ae0ca66,pod-template-hash: 56fcc65765,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:13:38.184064498Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f64e4538489ab0114de17e1f8f0c98d3d95618162fa5d2ed9b3853eb59a75d77,Metadata:&PodSandboxMetadata{Name:snapshot-controller-56fcc65765-8rk95,Uid:63d1f200-a587-488c-82d3-bf38586a6fd0,Namespace:kube-system,Attempt:0,},State:SANDB
OX_READY,CreatedAt:1726856018751807063,Labels:map[string]string{app: snapshot-controller,io.kubernetes.container.name: POD,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8rk95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63d1f200-a587-488c-82d3-bf38586a6fd0,pod-template-hash: 56fcc65765,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:13:38.239069695Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a0bef6fd3ee4b307210dd0ac0e2746329872520eb77ba21f03f92566351704f2,Metadata:&PodSandboxMetadata{Name:local-path-provisioner-86d989889c-tvbgx,Uid:b4d58283-346f-437d-adfb-34215341023e,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726856016061230450,Labels:map[string]string{app: local-path-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: local-path-provisioner-86d989889c-tvbgx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b4d58283-346f-437d-adfb-34215341023e,pod-template-hash: 86
d989889c,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:13:35.427893945Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2de8a3616c78216796d1a30e49390fa1880efae5c01dc6d060c3a9fc52733244,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:0e9e378d-208e-46e0-a2be-70f96e59408a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726856015557835131,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e9e378d-208e-46e0-a2be-70f96e59408a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\
"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-20T18:13:34.907364776Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:50aa8158427c9580c2a5ec7846daa046ebdb66adcc3769f3b811e9bfd73dee74,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:631849c1-f984-4e83-b07b-6b2ed4eb0697,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726856014009849470,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 631849c1-f984-4e83-b07b-6b2ed4eb0697,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2024-09-20T18:13:33.677247951Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a7fdf4add17f82634ceda8e2a8ce96fc2312b21d1
e4bcabce0730c45dba99a5b,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-8b5fx,Uid:226fc466-f0b5-4501-8879-b8b9b8d758ac,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726856010970292864,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-8b5fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 226fc466-f0b5-4501-8879-b8b9b8d758ac,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:13:30.659427641Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5aa37b64d2a9c61038f28fea479857487cf0c835df5704953ae6496a18553faf,Metadata:&PodSandboxMetadata{Name:kube-proxy-9pcgb,Uid:934faade-c115-4ced-9bb6-c22a2fe014f2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726856010781645032,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-9pcgb,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 934faade-c115-4ced-9bb6-c22a2fe014f2,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:13:30.465310043Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:403b403cdf21825fc57049326772376016cc8b60292a2666bdde28fa4d9d97d9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-446299,Uid:da0809c41e3f89be51ba1d85d92334c0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726855999991211347,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da0809c41e3f89be51ba1d85d92334c0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.237:8443,kubernetes.io/config.hash: da0809c41e3f89be51ba1d85d92334c0,kubernetes.io/config.seen: 2024-09-20T18:13:19.518541401Z,ku
bernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4306bc0f35baa7738aceb1c5a0dfcf9c43a7541ffb8e1e463f1d2bfb3b4ddf65,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-446299,Uid:3f419eac436c5a6f133bb67c6a198274,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726855999983056362,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f419eac436c5a6f133bb67c6a198274,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3f419eac436c5a6f133bb67c6a198274,kubernetes.io/config.seen: 2024-09-20T18:13:19.518543462Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:859cc747f1c82c2cfec8fa47af83f84bb172224df65a7adc26b7cd23a8e2bb3d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-446299,Uid:37c1dc236d6aa092754be85db9af15d9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,Create
dAt:1726855999969330949,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c1dc236d6aa092754be85db9af15d9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 37c1dc236d6aa092754be85db9af15d9,kubernetes.io/config.seen: 2024-09-20T18:13:19.518542642Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:17de22cbd91b4d025017f1149b32f2168ea0cac728b75d80f78ab208ff3de7aa,Metadata:&PodSandboxMetadata{Name:etcd-addons-446299,Uid:86ddc6bc2cc035d3de8f8c47a04894ae,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726855999965899875,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86ddc6bc2cc035d3de8f8c47a04894ae,tier: control-plane,},Annotations:map[string]string{ku
beadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.237:2379,kubernetes.io/config.hash: 86ddc6bc2cc035d3de8f8c47a04894ae,kubernetes.io/config.seen: 2024-09-20T18:13:19.518538167Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=65c2d85e-8723-4202-99c3-c3bc3b816607 name=/runtime.v1.RuntimeService/ListPodSandbox
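(The request/response pairs in this CRI-O debug log are ordinary CRI RPCs: ListPodSandbox for pod sandboxes and ListContainers for containers. Assuming crictl is available on the node, as it normally is inside a minikube VM, the same queries can be reproduced by hand against the node's CRI-O socket, for example:

	# issue the same CRI list calls that appear in the debug entries above
	minikube -p addons-446299 ssh "sudo crictl pods"
	minikube -p addons-446299 ssh "sudo crictl ps -a"
)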
	Sep 20 18:29:39 addons-446299 crio[659]: time="2024-09-20 18:29:39.167303861Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c4b9c3a7c53984fdcd53d01df116f55695ae712f2f303bd6c13b7f7ae352228,PodSandboxId:efe0ec0dcbcc2ed97a1516bf84bf6944f46cc3c709619429a3f8a6ed7ec20db4,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726856092713670363,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9scf7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e1fe9053-9c74-44c1-b9eb-33e656a4810b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba7dc5faa58b70f8ae294e26f758d07d8a41941a4b50201e68cc018c51a0c741,PodSandboxId:75840320e52800f1f44b2e6c517cc9307855642595e4a7055201d0ba2d030659,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726856089744039479,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-8kt58,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 91004bb0-5831-431e-8777-5e
8e4b5296bc,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b094e7c30c796bf0bee43b60b80d46621df4bbd767dc91c732eb3b7bfa0bb00c,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726856074238826249,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed98529d363a04b2955c02104f56e8a3cd80d69b45b2e1944ff3b0b7c189288,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726856072837441671,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69da68d150b2a5583b7305709c1c4bbf0f0a8590d238d599504b11d9ad7b529e,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726856070768208336,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd9ca7a3ca987a47ab5b416daf04522a3b27c6339db4003eb231d16ece603a60,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726856069831000814,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2b6759c0bf97ff3d4de314ce5ca4e5311a8546b342d1ec787ca3a1624f8908,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726856068009772282,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66723f0443fe259bbab9521031456f7833339138ca42ab655fadf6bafc2136c5,PodSandboxId:00b4d98c2977
96e0eb1b921793bddbf0c466ffdc076d60dd27517a349c2d3749,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726856066130067570,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684355d7-d68e-4357-8103-d8350a38ea37,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c917700eb77472b431699f7e3b8ffa5e99fb0c6e7b94da0e7dc3e5d789ff7866,Pod
SandboxId:3ffd6a03ee49011ec8d222722b52204537020ec67831669422b18f2722d276e2,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726856064693574171,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131974d-0f4b-4bc6-bec3-d4c797279aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:509b6bbf231a9f6acf9ed9b5a160d57af8fe6ce822
d14a360f1c69aead3f9d36,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726856062559192499,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e86a2c89e146b1f6fc31a26a2e49b335f8ae30c35e76d7136b68425260628fef,PodSandboxId:a24f9a7c284879488d62c5c3a7402fbdc7b2ff55b494a70888c8b4b46593c754,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061202431069,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2mwr8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: afcf3275-77b0-49cd-b425-e1c3fe89fe90,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:bf44e059a196a437fcc79e35dc09edc08e7e7fa8799df9f5556af7ec52f8bbcc,PodSandboxId:1938162f1608400bc041a5b0473880759f6d77d6783afec076342b08458fb334,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061156977853,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sdwls,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8334b2c4-8b09-408c-8652-46103ce6f6c6,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f5bce9e468f1d83d07951514190608f5cb1a2826158632ec7e66e3d069b730,PodSandboxId:46ab05da30745fa494969aa465b9ae41146fb457dd17388f6f0fbfa7637de4b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059566643922,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-4qwlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4cd83fc-a074-4317-9b02-22010ae0ca66,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf93216045927e57562a5ef14225eebdfc0b71d50b89062312728787ee2e82f,PodSandboxId:f64e4538489ab0114de17e1f8f0c98d3d95618162fa5d2ed9b3853eb59a75d77,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059450265287,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8rk95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63d1f200-a587-488c-82d3-bf38586a6fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b425ff4f976afe3cb61d35934638e72a10e0094f7b61f40352a2fee42636302f,PodSandboxId:a0bef6fd3ee4b307210dd0ac0e2746329872520eb77ba21f03f92566351704f2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726856046927873598,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-tvbgx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b4d58283-346f-437d-adfb-34215341023e,},A
nnotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68195d8abd2e36c4e6f93a25bf60ca76fde83bf77a850a92b5213e7653c8414e,PodSandboxId:50aa8158427c9580c2a5ec7846daa046ebdb66adcc3769f3b811e9bfd73dee74,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726856026660615460,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 631849c1-f984-4e83-b07b-6b2ed4eb0697,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123e17c57dc2abd9c047233f8257257a3994d71637992344add53ad7199bd9f0,PodSandboxId:2de8a3616c78216796d1a30e49390fa1880efae5c01dc6d060c3a9fc52733244,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856016407131102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name:
storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e9e378d-208e-46e0-a2be-70f96e59408a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52dc29cba22a178059e3f5273c57de1362df61bcd21abc9ad9c5058087ed31a,PodSandboxId:a7fdf4add17f82634ceda8e2a8ce96fc2312b21d1e4bcabce0730c45dba99a5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856014256879968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8b5fx,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 226fc466-f0b5-4501-8879-b8b9b8d758ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371fb9f89e965c1d1f23b67cb00baa69dc199d2d1a7cb0255780a4516c7256a6,PodSandboxId:5aa37b64d2a9c61038f28fea479857487cf0c835df5704953ae6496a18553faf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063
eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856011173606981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pcgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934faade-c115-4ced-9bb6-c22a2fe014f2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e7734f588477ea0c8338b75bff4c99d2033144998f9977041fbf99b5880072,PodSandboxId:4306bc0f35baa7738aceb1c5a0dfcf9c43a7541ffb8e1e463f1d2bfb3b4ddf65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:C
ONTAINER_RUNNING,CreatedAt:1726856000251287780,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f419eac436c5a6f133bb67c6a198274,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730952f4127d66b35d731eb28568293e71789263c71a1a0255283cb51922992c,PodSandboxId:403b403cdf21825fc57049326772376016cc8b60292a2666bdde28fa4d9d97d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,
CreatedAt:1726856000260280505,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da0809c41e3f89be51ba1d85d92334c0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402ab000bdb9360b9d14054aa336dc4312504e85cd5336ba788bcc24a74fb551,PodSandboxId:17de22cbd91b4d025017f1149b32f2168ea0cac728b75d80f78ab208ff3de7aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17268560002331561
33,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86ddc6bc2cc035d3de8f8c47a04894ae,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8af18aadd9a198bf616d46a7b451c4aa04e96f96e40f4b3bfe6f0ed2db6278e,PodSandboxId:859cc747f1c82c2cfec8fa47af83f84bb172224df65a7adc26b7cd23a8e2bb3d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856000241829850,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c1dc236d6aa092754be85db9af15d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9abaa60d-786a-4b51-a3ac-9096c481c48b name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:29:39 addons-446299 crio[659]: time="2024-09-20 18:29:39.168761794Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ddebfdc-854b-4b7a-8a99-5591896db071 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:29:39 addons-446299 crio[659]: time="2024-09-20 18:29:39.168812122Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ddebfdc-854b-4b7a-8a99-5591896db071 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:29:39 addons-446299 crio[659]: time="2024-09-20 18:29:39.169173780Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856979169153033,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=63c6fc17-9dca-4a22-ab8c-61778f69c080 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:29:39 addons-446299 crio[659]: time="2024-09-20 18:29:39.169558508Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c4b9c3a7c53984fdcd53d01df116f55695ae712f2f303bd6c13b7f7ae352228,PodSandboxId:efe0ec0dcbcc2ed97a1516bf84bf6944f46cc3c709619429a3f8a6ed7ec20db4,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726856092713670363,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9scf7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e1fe9053-9c74-44c1-b9eb-33e656a4810b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba7dc5faa58b70f8ae294e26f758d07d8a41941a4b50201e68cc018c51a0c741,PodSandboxId:75840320e52800f1f44b2e6c517cc9307855642595e4a7055201d0ba2d030659,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726856089744039479,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-8kt58,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 91004bb0-5831-431e-8777-5e
8e4b5296bc,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b094e7c30c796bf0bee43b60b80d46621df4bbd767dc91c732eb3b7bfa0bb00c,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726856074238826249,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed98529d363a04b2955c02104f56e8a3cd80d69b45b2e1944ff3b0b7c189288,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726856072837441671,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69da68d150b2a5583b7305709c1c4bbf0f0a8590d238d599504b11d9ad7b529e,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726856070768208336,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd9ca7a3ca987a47ab5b416daf04522a3b27c6339db4003eb231d16ece603a60,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726856069831000814,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2b6759c0bf97ff3d4de314ce5ca4e5311a8546b342d1ec787ca3a1624f8908,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726856068009772282,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66723f0443fe259bbab9521031456f7833339138ca42ab655fadf6bafc2136c5,PodSandboxId:00b4d98c2977
96e0eb1b921793bddbf0c466ffdc076d60dd27517a349c2d3749,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726856066130067570,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684355d7-d68e-4357-8103-d8350a38ea37,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c917700eb77472b431699f7e3b8ffa5e99fb0c6e7b94da0e7dc3e5d789ff7866,Pod
SandboxId:3ffd6a03ee49011ec8d222722b52204537020ec67831669422b18f2722d276e2,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726856064693574171,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131974d-0f4b-4bc6-bec3-d4c797279aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:509b6bbf231a9f6acf9ed9b5a160d57af8fe6ce822
d14a360f1c69aead3f9d36,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726856062559192499,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f5bce9e468f1d83d07951514190608f5cb1a2826158632ec7e66e3d069b730,PodSandboxId:46ab05da30745fa494969aa465b9ae41146fb457dd17388f6f0fbfa7637de4b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059566643922,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-4qwlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4cd83fc-a074-4317-9b02-22010ae0ca66,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf93216045927e57562a5ef14225eebdfc0b71d50b89062312728787ee2e82f,PodSandboxId:f64e4538489ab0114de17e1f8f0c98d3d95618162fa5d2ed9b3853eb59a75d77,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059450265287,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8rk95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63d1f200-a587-488c-82d3-bf38586a6fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.co
ntainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b425ff4f976afe3cb61d35934638e72a10e0094f7b61f40352a2fee42636302f,PodSandboxId:a0bef6fd3ee4b307210dd0ac0e2746329872520eb77ba21f03f92566351704f2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726856046927873598,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-tvbgx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b4d58283-346f-437d-adfb-34215341023e,},Annotations:map[
string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68195d8abd2e36c4e6f93a25bf60ca76fde83bf77a850a92b5213e7653c8414e,PodSandboxId:50aa8158427c9580c2a5ec7846daa046ebdb66adcc3769f3b811e9bfd73dee74,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726856026660615460,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 631849c1-f9
84-4e83-b07b-6b2ed4eb0697,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123e17c57dc2abd9c047233f8257257a3994d71637992344add53ad7199bd9f0,PodSandboxId:2de8a3616c78216796d1a30e49390fa1880efae5c01dc6d060c3a9fc52733244,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856016407131102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provis
ioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e9e378d-208e-46e0-a2be-70f96e59408a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52dc29cba22a178059e3f5273c57de1362df61bcd21abc9ad9c5058087ed31a,PodSandboxId:a7fdf4add17f82634ceda8e2a8ce96fc2312b21d1e4bcabce0730c45dba99a5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856014256879968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8b5fx,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 226fc466-f0b5-4501-8879-b8b9b8d758ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371fb9f89e965c1d1f23b67cb00baa69dc199d2d1a7cb0255780a4516c7256a6,PodSandboxId:5aa37b64d2a9c61038f28fea479857487cf0c835df5704953ae6496a18553faf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0
a89561,State:CONTAINER_RUNNING,CreatedAt:1726856011173606981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pcgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934faade-c115-4ced-9bb6-c22a2fe014f2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e7734f588477ea0c8338b75bff4c99d2033144998f9977041fbf99b5880072,PodSandboxId:4306bc0f35baa7738aceb1c5a0dfcf9c43a7541ffb8e1e463f1d2bfb3b4ddf65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNIN
G,CreatedAt:1726856000251287780,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f419eac436c5a6f133bb67c6a198274,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730952f4127d66b35d731eb28568293e71789263c71a1a0255283cb51922992c,PodSandboxId:403b403cdf21825fc57049326772376016cc8b60292a2666bdde28fa4d9d97d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:17268
56000260280505,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da0809c41e3f89be51ba1d85d92334c0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402ab000bdb9360b9d14054aa336dc4312504e85cd5336ba788bcc24a74fb551,PodSandboxId:17de22cbd91b4d025017f1149b32f2168ea0cac728b75d80f78ab208ff3de7aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856000233156133,Labels:map[s
tring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86ddc6bc2cc035d3de8f8c47a04894ae,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8af18aadd9a198bf616d46a7b451c4aa04e96f96e40f4b3bfe6f0ed2db6278e,PodSandboxId:859cc747f1c82c2cfec8fa47af83f84bb172224df65a7adc26b7cd23a8e2bb3d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856000241829850,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c1dc236d6aa092754be85db9af15d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0ddebfdc-854b-4b7a-8a99-5591896db071 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:29:39 addons-446299 crio[659]: time="2024-09-20 18:29:39.169888212Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=47162f36-a04c-4599-b2e6-357a46ccda86 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:29:39 addons-446299 crio[659]: time="2024-09-20 18:29:39.170264374Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=47162f36-a04c-4599-b2e6-357a46ccda86 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:29:39 addons-446299 crio[659]: time="2024-09-20 18:29:39.170631430Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c4b9c3a7c53984fdcd53d01df116f55695ae712f2f303bd6c13b7f7ae352228,PodSandboxId:efe0ec0dcbcc2ed97a1516bf84bf6944f46cc3c709619429a3f8a6ed7ec20db4,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726856092713670363,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9scf7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e1fe9053-9c74-44c1-b9eb-33e656a4810b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba7dc5faa58b70f8ae294e26f758d07d8a41941a4b50201e68cc018c51a0c741,PodSandboxId:75840320e52800f1f44b2e6c517cc9307855642595e4a7055201d0ba2d030659,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726856089744039479,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-8kt58,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 91004bb0-5831-431e-8777-5e
8e4b5296bc,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b094e7c30c796bf0bee43b60b80d46621df4bbd767dc91c732eb3b7bfa0bb00c,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726856074238826249,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed98529d363a04b2955c02104f56e8a3cd80d69b45b2e1944ff3b0b7c189288,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726856072837441671,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69da68d150b2a5583b7305709c1c4bbf0f0a8590d238d599504b11d9ad7b529e,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726856070768208336,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd9ca7a3ca987a47ab5b416daf04522a3b27c6339db4003eb231d16ece603a60,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726856069831000814,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2b6759c0bf97ff3d4de314ce5ca4e5311a8546b342d1ec787ca3a1624f8908,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726856068009772282,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66723f0443fe259bbab9521031456f7833339138ca42ab655fadf6bafc2136c5,PodSandboxId:00b4d98c2977
96e0eb1b921793bddbf0c466ffdc076d60dd27517a349c2d3749,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726856066130067570,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684355d7-d68e-4357-8103-d8350a38ea37,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c917700eb77472b431699f7e3b8ffa5e99fb0c6e7b94da0e7dc3e5d789ff7866,Pod
SandboxId:3ffd6a03ee49011ec8d222722b52204537020ec67831669422b18f2722d276e2,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726856064693574171,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131974d-0f4b-4bc6-bec3-d4c797279aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:509b6bbf231a9f6acf9ed9b5a160d57af8fe6ce822
d14a360f1c69aead3f9d36,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726856062559192499,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e86a2c89e146b1f6fc31a26a2e49b335f8ae30c35e76d7136b68425260628fef,PodSandboxId:a24f9a7c284879488d62c5c3a7402fbdc7b2ff55b494a70888c8b4b46593c754,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061202431069,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2mwr8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: afcf3275-77b0-49cd-b425-e1c3fe89fe90,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:bf44e059a196a437fcc79e35dc09edc08e7e7fa8799df9f5556af7ec52f8bbcc,PodSandboxId:1938162f1608400bc041a5b0473880759f6d77d6783afec076342b08458fb334,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061156977853,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sdwls,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8334b2c4-8b09-408c-8652-46103ce6f6c6,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f5bce9e468f1d83d07951514190608f5cb1a2826158632ec7e66e3d069b730,PodSandboxId:46ab05da30745fa494969aa465b9ae41146fb457dd17388f6f0fbfa7637de4b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059566643922,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-4qwlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4cd83fc-a074-4317-9b02-22010ae0ca66,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf93216045927e57562a5ef14225eebdfc0b71d50b89062312728787ee2e82f,PodSandboxId:f64e4538489ab0114de17e1f8f0c98d3d95618162fa5d2ed9b3853eb59a75d77,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059450265287,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8rk95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63d1f200-a587-488c-82d3-bf38586a6fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b425ff4f976afe3cb61d35934638e72a10e0094f7b61f40352a2fee42636302f,PodSandboxId:a0bef6fd3ee4b307210dd0ac0e2746329872520eb77ba21f03f92566351704f2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726856046927873598,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-tvbgx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b4d58283-346f-437d-adfb-34215341023e,},A
nnotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68195d8abd2e36c4e6f93a25bf60ca76fde83bf77a850a92b5213e7653c8414e,PodSandboxId:50aa8158427c9580c2a5ec7846daa046ebdb66adcc3769f3b811e9bfd73dee74,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726856026660615460,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 631849c1-f984-4e83-b07b-6b2ed4eb0697,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123e17c57dc2abd9c047233f8257257a3994d71637992344add53ad7199bd9f0,PodSandboxId:2de8a3616c78216796d1a30e49390fa1880efae5c01dc6d060c3a9fc52733244,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856016407131102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name:
storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e9e378d-208e-46e0-a2be-70f96e59408a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52dc29cba22a178059e3f5273c57de1362df61bcd21abc9ad9c5058087ed31a,PodSandboxId:a7fdf4add17f82634ceda8e2a8ce96fc2312b21d1e4bcabce0730c45dba99a5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856014256879968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8b5fx,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 226fc466-f0b5-4501-8879-b8b9b8d758ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371fb9f89e965c1d1f23b67cb00baa69dc199d2d1a7cb0255780a4516c7256a6,PodSandboxId:5aa37b64d2a9c61038f28fea479857487cf0c835df5704953ae6496a18553faf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063
eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856011173606981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pcgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934faade-c115-4ced-9bb6-c22a2fe014f2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e7734f588477ea0c8338b75bff4c99d2033144998f9977041fbf99b5880072,PodSandboxId:4306bc0f35baa7738aceb1c5a0dfcf9c43a7541ffb8e1e463f1d2bfb3b4ddf65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:C
ONTAINER_RUNNING,CreatedAt:1726856000251287780,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f419eac436c5a6f133bb67c6a198274,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730952f4127d66b35d731eb28568293e71789263c71a1a0255283cb51922992c,PodSandboxId:403b403cdf21825fc57049326772376016cc8b60292a2666bdde28fa4d9d97d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,
CreatedAt:1726856000260280505,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da0809c41e3f89be51ba1d85d92334c0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402ab000bdb9360b9d14054aa336dc4312504e85cd5336ba788bcc24a74fb551,PodSandboxId:17de22cbd91b4d025017f1149b32f2168ea0cac728b75d80f78ab208ff3de7aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17268560002331561
33,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86ddc6bc2cc035d3de8f8c47a04894ae,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8af18aadd9a198bf616d46a7b451c4aa04e96f96e40f4b3bfe6f0ed2db6278e,PodSandboxId:859cc747f1c82c2cfec8fa47af83f84bb172224df65a7adc26b7cd23a8e2bb3d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856000241829850,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c1dc236d6aa092754be85db9af15d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=47162f36-a04c-4599-b2e6-357a46ccda86 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:29:39 addons-446299 crio[659]: time="2024-09-20 18:29:39.262486985Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f8a53239-c153-4898-8aec-1656913c65a3 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:29:39 addons-446299 crio[659]: time="2024-09-20 18:29:39.262558842Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f8a53239-c153-4898-8aec-1656913c65a3 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:29:39 addons-446299 crio[659]: time="2024-09-20 18:29:39.264019626Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6eb49016-941e-4475-b2c9-397000af7bf8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:29:39 addons-446299 crio[659]: time="2024-09-20 18:29:39.265054712Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856979265031201,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6eb49016-941e-4475-b2c9-397000af7bf8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:29:39 addons-446299 crio[659]: time="2024-09-20 18:29:39.265880368Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a37683f-63c2-4a0b-a063-8bceba28e1f7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:29:39 addons-446299 crio[659]: time="2024-09-20 18:29:39.265938214Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a37683f-63c2-4a0b-a063-8bceba28e1f7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:29:39 addons-446299 crio[659]: time="2024-09-20 18:29:39.266408606Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c4b9c3a7c53984fdcd53d01df116f55695ae712f2f303bd6c13b7f7ae352228,PodSandboxId:efe0ec0dcbcc2ed97a1516bf84bf6944f46cc3c709619429a3f8a6ed7ec20db4,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726856092713670363,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9scf7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e1fe9053-9c74-44c1-b9eb-33e656a4810b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba7dc5faa58b70f8ae294e26f758d07d8a41941a4b50201e68cc018c51a0c741,PodSandboxId:75840320e52800f1f44b2e6c517cc9307855642595e4a7055201d0ba2d030659,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726856089744039479,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-8kt58,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 91004bb0-5831-431e-8777-5e
8e4b5296bc,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b094e7c30c796bf0bee43b60b80d46621df4bbd767dc91c732eb3b7bfa0bb00c,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726856074238826249,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed98529d363a04b2955c02104f56e8a3cd80d69b45b2e1944ff3b0b7c189288,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726856072837441671,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69da68d150b2a5583b7305709c1c4bbf0f0a8590d238d599504b11d9ad7b529e,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726856070768208336,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd9ca7a3ca987a47ab5b416daf04522a3b27c6339db4003eb231d16ece603a60,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726856069831000814,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2b6759c0bf97ff3d4de314ce5ca4e5311a8546b342d1ec787ca3a1624f8908,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726856068009772282,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66723f0443fe259bbab9521031456f7833339138ca42ab655fadf6bafc2136c5,PodSandboxId:00b4d98c2977
96e0eb1b921793bddbf0c466ffdc076d60dd27517a349c2d3749,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726856066130067570,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684355d7-d68e-4357-8103-d8350a38ea37,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c917700eb77472b431699f7e3b8ffa5e99fb0c6e7b94da0e7dc3e5d789ff7866,Pod
SandboxId:3ffd6a03ee49011ec8d222722b52204537020ec67831669422b18f2722d276e2,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726856064693574171,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131974d-0f4b-4bc6-bec3-d4c797279aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:509b6bbf231a9f6acf9ed9b5a160d57af8fe6ce822
d14a360f1c69aead3f9d36,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726856062559192499,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e86a2c89e146b1f6fc31a26a2e49b335f8ae30c35e76d7136b68425260628fef,PodSandboxId:a24f9a7c284879488d62c5c3a7402fbdc7b2ff55b494a70888c8b4b46593c754,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061202431069,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2mwr8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: afcf3275-77b0-49cd-b425-e1c3fe89fe90,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:bf44e059a196a437fcc79e35dc09edc08e7e7fa8799df9f5556af7ec52f8bbcc,PodSandboxId:1938162f1608400bc041a5b0473880759f6d77d6783afec076342b08458fb334,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061156977853,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sdwls,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8334b2c4-8b09-408c-8652-46103ce6f6c6,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f5bce9e468f1d83d07951514190608f5cb1a2826158632ec7e66e3d069b730,PodSandboxId:46ab05da30745fa494969aa465b9ae41146fb457dd17388f6f0fbfa7637de4b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059566643922,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-4qwlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4cd83fc-a074-4317-9b02-22010ae0ca66,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf93216045927e57562a5ef14225eebdfc0b71d50b89062312728787ee2e82f,PodSandboxId:f64e4538489ab0114de17e1f8f0c98d3d95618162fa5d2ed9b3853eb59a75d77,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059450265287,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8rk95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63d1f200-a587-488c-82d3-bf38586a6fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b425ff4f976afe3cb61d35934638e72a10e0094f7b61f40352a2fee42636302f,PodSandboxId:a0bef6fd3ee4b307210dd0ac0e2746329872520eb77ba21f03f92566351704f2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726856046927873598,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-tvbgx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b4d58283-346f-437d-adfb-34215341023e,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68195d8abd2e36c4e6f93a25bf60ca76fde83bf77a850a92b5213e7653c8414e,PodSandboxId:50aa8158427c9580c2a5ec7846daa046ebdb66adcc3769f3b811e9bfd73dee74,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726856026660615460,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 631849c1-f984-4e83-b07b-6b2ed4eb0697,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123e17c57dc2abd9c047233f8257257a3994d71637992344add53ad7199bd9f0,PodSandboxId:2de8a3616c78216796d1a30e49390fa1880efae5c01dc6d060c3a9fc52733244,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856016407131102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e9e378d-208e-46e0-a2be-70f96e59408a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52dc29cba22a178059e3f5273c57de1362df61bcd21abc9ad9c5058087ed31a,PodSandboxId:a7fdf4add17f82634ceda8e2a8ce96fc2312b21d1e4bcabce0730c45dba99a5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856014256879968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8b5fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 226fc466-f0b5-4501-8879-b8b9b8d758ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371fb9f89e965c1d1f23b67cb00baa69dc199d2d1a7cb0255780a4516c7256a6,PodSandboxId:5aa37b64d2a9c61038f28fea479857487cf0c835df5704953ae6496a18553faf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856011173606981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pcgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934faade-c115-4ced-9bb6-c22a2fe014f2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e7734f588477ea0c8338b75bff4c99d2033144998f9977041fbf99b5880072,PodSandboxId:4306bc0f35baa7738aceb1c5a0dfcf9c43a7541ffb8e1e463f1d2bfb3b4ddf65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856000251287780,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f419eac436c5a6f133bb67c6a198274,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730952f4127d66b35d731eb28568293e71789263c71a1a0255283cb51922992c,PodSandboxId:403b403cdf21825fc57049326772376016cc8b60292a2666bdde28fa4d9d97d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856000260280505,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da0809c41e3f89be51ba1d85d92334c0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402ab000bdb9360b9d14054aa336dc4312504e85cd5336ba788bcc24a74fb551,PodSandboxId:17de22cbd91b4d025017f1149b32f2168ea0cac728b75d80f78ab208ff3de7aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856000233156133,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86ddc6bc2cc035d3de8f8c47a04894ae,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8af18aadd9a198bf616d46a7b451c4aa04e96f96e40f4b3bfe6f0ed2db6278e,PodSandboxId:859cc747f1c82c2cfec8fa47af83f84bb172224df65a7adc26b7cd23a8e2bb3d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856000241829850,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c1dc236d6aa092754be85db9af15d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a37683f-63c2-4a0b-a063-8bceba28e1f7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	7c4b9c3a7c539       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 14 minutes ago      Running             gcp-auth                                 0                   efe0ec0dcbcc2       gcp-auth-89d5ffd79-9scf7
	ba7dc5faa58b7       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6                             14 minutes ago      Running             controller                               0                   75840320e5280       ingress-nginx-controller-bc57996ff-8kt58
	b094e7c30c796       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          15 minutes ago      Running             csi-snapshotter                          0                   eccc7c4b1b4ce       csi-hostpathplugin-fcmx5
	bed98529d363a       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          15 minutes ago      Running             csi-provisioner                          0                   eccc7c4b1b4ce       csi-hostpathplugin-fcmx5
	69da68d150b2a       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            15 minutes ago      Running             liveness-probe                           0                   eccc7c4b1b4ce       csi-hostpathplugin-fcmx5
	fd9ca7a3ca987       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           15 minutes ago      Running             hostpath                                 0                   eccc7c4b1b4ce       csi-hostpathplugin-fcmx5
	5a2b6759c0bf9       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                15 minutes ago      Running             node-driver-registrar                    0                   eccc7c4b1b4ce       csi-hostpathplugin-fcmx5
	66723f0443fe2       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              15 minutes ago      Running             csi-resizer                              0                   00b4d98c29779       csi-hostpath-resizer-0
	c917700eb7747       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             15 minutes ago      Running             csi-attacher                             0                   3ffd6a03ee490       csi-hostpath-attacher-0
	509b6bbf231a9       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   15 minutes ago      Running             csi-external-health-monitor-controller   0                   eccc7c4b1b4ce       csi-hostpathplugin-fcmx5
	e86a2c89e146b       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                                             15 minutes ago      Exited              patch                                    1                   a24f9a7c28487       ingress-nginx-admission-patch-2mwr8
	bf44e059a196a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   15 minutes ago      Exited              create                                   0                   1938162f16084       ingress-nginx-admission-create-sdwls
	33f5bce9e468f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      15 minutes ago      Running             volume-snapshot-controller               0                   46ab05da30745       snapshot-controller-56fcc65765-4qwlb
	cbf9321604592       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      15 minutes ago      Running             volume-snapshot-controller               0                   f64e4538489ab       snapshot-controller-56fcc65765-8rk95
	b425ff4f976af       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             15 minutes ago      Running             local-path-provisioner                   0                   a0bef6fd3ee4b       local-path-provisioner-86d989889c-tvbgx
	68195d8abd2e3       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             15 minutes ago      Running             minikube-ingress-dns                     0                   50aa8158427c9       kube-ingress-dns-minikube
	123e17c57dc2a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             16 minutes ago      Running             storage-provisioner                      0                   2de8a3616c782       storage-provisioner
	d52dc29cba22a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             16 minutes ago      Running             coredns                                  0                   a7fdf4add17f8       coredns-7c65d6cfc9-8b5fx
	371fb9f89e965       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                                             16 minutes ago      Running             kube-proxy                               0                   5aa37b64d2a9c       kube-proxy-9pcgb
	730952f4127d6       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                                             16 minutes ago      Running             kube-apiserver                           0                   403b403cdf218       kube-apiserver-addons-446299
	e9e7734f58847       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                                             16 minutes ago      Running             kube-scheduler                           0                   4306bc0f35baa       kube-scheduler-addons-446299
	a8af18aadd9a1       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                                             16 minutes ago      Running             kube-controller-manager                  0                   859cc747f1c82       kube-controller-manager-addons-446299
	402ab000bdb93       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             16 minutes ago      Running             etcd                                     0                   17de22cbd91b4       etcd-addons-446299
	
	
	==> coredns [d52dc29cba22a178059e3f5273c57de1362df61bcd21abc9ad9c5058087ed31a] <==
	[INFO] 127.0.0.1:45092 - 31226 "HINFO IN 8537533385009167611.1098357581305743543. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017946303s
	[INFO] 10.244.0.7:50895 - 60070 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000864499s
	[INFO] 10.244.0.7:50895 - 30883 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.004754851s
	[INFO] 10.244.0.7:60479 - 45291 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000276551s
	[INFO] 10.244.0.7:60479 - 60648 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000259587s
	[INFO] 10.244.0.7:34337 - 50221 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000103649s
	[INFO] 10.244.0.7:34337 - 3119 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000190818s
	[INFO] 10.244.0.7:50579 - 48699 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000149541s
	[INFO] 10.244.0.7:50579 - 13882 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00029954s
	[INFO] 10.244.0.7:52674 - 19194 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000100903s
	[INFO] 10.244.0.7:52674 - 48616 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000131897s
	[INFO] 10.244.0.7:34842 - 24908 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000052174s
	[INFO] 10.244.0.7:34842 - 17742 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000131345s
	[INFO] 10.244.0.7:58542 - 36156 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000047177s
	[INFO] 10.244.0.7:58542 - 62014 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000148973s
	[INFO] 10.244.0.7:34082 - 14251 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000145316s
	[INFO] 10.244.0.7:34082 - 45485 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000238133s
	[INFO] 10.244.0.21:56997 - 31030 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000537673s
	[INFO] 10.244.0.21:35720 - 34441 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000147988s
	[INFO] 10.244.0.21:53795 - 23425 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001554s
	[INFO] 10.244.0.21:58869 - 385 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000122258s
	[INFO] 10.244.0.21:37326 - 35127 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00023415s
	[INFO] 10.244.0.21:35448 - 47752 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000126595s
	[INFO] 10.244.0.21:41454 - 25870 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003639103s
	[INFO] 10.244.0.21:51708 - 51164 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00402176s
	
	
	==> describe nodes <==
	Name:               addons-446299
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-446299
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=addons-446299
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_13_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-446299
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-446299"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:13:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-446299
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:29:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:28:34 +0000   Fri, 20 Sep 2024 18:13:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:28:34 +0000   Fri, 20 Sep 2024 18:13:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:28:34 +0000   Fri, 20 Sep 2024 18:13:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:28:34 +0000   Fri, 20 Sep 2024 18:13:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.237
	  Hostname:    addons-446299
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 6b51819720d24a4988f4faf5cbed4e8f
	  System UUID:                6b518197-20d2-4a49-88f4-faf5cbed4e8f
	  Boot ID:                    431228fc-f5a8-4282-bf7e-10c36798659f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  default                     task-pv-pod-restore                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  gcp-auth                    gcp-auth-89d5ffd79-9scf7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-8kt58    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         16m
	  kube-system                 coredns-7c65d6cfc9-8b5fx                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     16m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 csi-hostpathplugin-fcmx5                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 etcd-addons-446299                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         16m
	  kube-system                 kube-apiserver-addons-446299                250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-addons-446299       200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-9pcgb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-addons-446299                100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 snapshot-controller-56fcc65765-4qwlb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 snapshot-controller-56fcc65765-8rk95        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  local-path-storage          local-path-provisioner-86d989889c-tvbgx     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node addons-446299 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node addons-446299 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node addons-446299 status is now: NodeHasSufficientPID
	  Normal  NodeReady                16m   kubelet          Node addons-446299 status is now: NodeReady
	  Normal  RegisteredNode           16m   node-controller  Node addons-446299 event: Registered Node addons-446299 in Controller
	
	
	==> dmesg <==
	[  +5.305303] systemd-fstab-generator[1328]: Ignoring "noauto" option for root device
	[  +0.141616] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.046436] kauditd_printk_skb: 135 callbacks suppressed
	[  +5.120665] kauditd_printk_skb: 83 callbacks suppressed
	[  +5.997269] kauditd_printk_skb: 97 callbacks suppressed
	[  +8.458196] kauditd_printk_skb: 5 callbacks suppressed
	[Sep20 18:14] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.706525] kauditd_printk_skb: 34 callbacks suppressed
	[ +16.244583] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.135040] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.940354] kauditd_printk_skb: 30 callbacks suppressed
	[ +13.767745] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.007018] kauditd_printk_skb: 48 callbacks suppressed
	[Sep20 18:15] kauditd_printk_skb: 10 callbacks suppressed
	[Sep20 18:16] kauditd_printk_skb: 30 callbacks suppressed
	[Sep20 18:17] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 18:20] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 18:22] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 18:23] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.877503] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.382620] kauditd_printk_skb: 41 callbacks suppressed
	[  +8.681981] kauditd_printk_skb: 39 callbacks suppressed
	[ +13.570039] kauditd_printk_skb: 14 callbacks suppressed
	[Sep20 18:24] kauditd_printk_skb: 2 callbacks suppressed
	[ +30.180557] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [402ab000bdb9360b9d14054aa336dc4312504e85cd5336ba788bcc24a74fb551] <==
	{"level":"warn","ts":"2024-09-20T18:14:32.753338Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"340.730876ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:14:32.753372Z","caller":"traceutil/trace.go:171","msg":"trace[1542998802] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1058; }","duration":"340.769961ms","start":"2024-09-20T18:14:32.412597Z","end":"2024-09-20T18:14:32.753367Z","steps":["trace[1542998802] 'agreement among raft nodes before linearized reading'  (duration: 340.724283ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:14:32.753846Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"217.265355ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:14:32.753903Z","caller":"traceutil/trace.go:171","msg":"trace[581069886] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1058; }","duration":"217.327931ms","start":"2024-09-20T18:14:32.536567Z","end":"2024-09-20T18:14:32.753895Z","steps":["trace[581069886] 'agreement among raft nodes before linearized reading'  (duration: 217.246138ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:14:51.903628Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.538818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-20T18:14:51.904065Z","caller":"traceutil/trace.go:171","msg":"trace[2043860769] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:0; response_revision:1117; }","duration":"144.082045ms","start":"2024-09-20T18:14:51.759954Z","end":"2024-09-20T18:14:51.904036Z","steps":["trace[2043860769] 'count revisions from in-memory index tree'  (duration: 143.478073ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:14:51.904831Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.923374ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:14:51.904891Z","caller":"traceutil/trace.go:171","msg":"trace[386261722] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1117; }","duration":"111.005288ms","start":"2024-09-20T18:14:51.793876Z","end":"2024-09-20T18:14:51.904881Z","steps":["trace[386261722] 'range keys from in-memory index tree'  (duration: 110.882796ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:23:04.403949Z","caller":"traceutil/trace.go:171","msg":"trace[1232773900] linearizableReadLoop","detail":"{readStateIndex:2064; appliedIndex:2063; }","duration":"137.955638ms","start":"2024-09-20T18:23:04.265959Z","end":"2024-09-20T18:23:04.403914Z","steps":["trace[1232773900] 'read index received'  (duration: 137.83631ms)","trace[1232773900] 'applied index is now lower than readState.Index'  (duration: 118.922µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T18:23:04.404190Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.160514ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:23:04.404218Z","caller":"traceutil/trace.go:171","msg":"trace[1586547199] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1925; }","duration":"138.254725ms","start":"2024-09-20T18:23:04.265955Z","end":"2024-09-20T18:23:04.404210Z","steps":["trace[1586547199] 'agreement among raft nodes before linearized reading'  (duration: 138.105756ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:23:04.404422Z","caller":"traceutil/trace.go:171","msg":"trace[700372140] transaction","detail":"{read_only:false; response_revision:1925; number_of_response:1; }","duration":"379.764994ms","start":"2024-09-20T18:23:04.024645Z","end":"2024-09-20T18:23:04.404410Z","steps":["trace[700372140] 'process raft request'  (duration: 379.19458ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:23:04.404517Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T18:23:04.024622Z","time spent":"379.814521ms","remote":"127.0.0.1:36928","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1924 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-20T18:23:21.256394Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1506}
	{"level":"info","ts":"2024-09-20T18:23:21.288238Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1506,"took":"31.314726ms","hash":517065302,"current-db-size-bytes":7016448,"current-db-size":"7.0 MB","current-db-size-in-use-bytes":4055040,"current-db-size-in-use":"4.1 MB"}
	{"level":"info","ts":"2024-09-20T18:23:21.288299Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":517065302,"revision":1506,"compact-revision":-1}
	{"level":"info","ts":"2024-09-20T18:23:22.430993Z","caller":"traceutil/trace.go:171","msg":"trace[200479020] transaction","detail":"{read_only:false; response_revision:2108; number_of_response:1; }","duration":"314.888557ms","start":"2024-09-20T18:23:22.116093Z","end":"2024-09-20T18:23:22.430981Z","steps":["trace[200479020] 'process raft request'  (duration: 314.552392ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:23:22.431107Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T18:23:22.116078Z","time spent":"314.951125ms","remote":"127.0.0.1:37058","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:2038 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:425 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	{"level":"info","ts":"2024-09-20T18:23:23.865254Z","caller":"traceutil/trace.go:171","msg":"trace[102178879] linearizableReadLoop","detail":"{readStateIndex:2258; appliedIndex:2257; }","duration":"203.488059ms","start":"2024-09-20T18:23:23.661753Z","end":"2024-09-20T18:23:23.865241Z","steps":["trace[102178879] 'read index received'  (duration: 203.347953ms)","trace[102178879] 'applied index is now lower than readState.Index'  (duration: 139.623µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T18:23:23.865357Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.585815ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:23:23.865380Z","caller":"traceutil/trace.go:171","msg":"trace[1945616439] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2110; }","duration":"203.624964ms","start":"2024-09-20T18:23:23.661749Z","end":"2024-09-20T18:23:23.865374Z","steps":["trace[1945616439] 'agreement among raft nodes before linearized reading'  (duration: 203.546895ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:23:23.865639Z","caller":"traceutil/trace.go:171","msg":"trace[1429413700] transaction","detail":"{read_only:false; response_revision:2110; number_of_response:1; }","duration":"210.845365ms","start":"2024-09-20T18:23:23.654785Z","end":"2024-09-20T18:23:23.865631Z","steps":["trace[1429413700] 'process raft request'  (duration: 210.352466ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:28:21.262984Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2106}
	{"level":"info","ts":"2024-09-20T18:28:21.285870Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2106,"took":"22.302077ms","hash":3491567488,"current-db-size-bytes":7016448,"current-db-size":"7.0 MB","current-db-size-in-use-bytes":3915776,"current-db-size-in-use":"3.9 MB"}
	{"level":"info","ts":"2024-09-20T18:28:21.285936Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3491567488,"revision":2106,"compact-revision":1506}
	
	
	==> gcp-auth [7c4b9c3a7c53984fdcd53d01df116f55695ae712f2f303bd6c13b7f7ae352228] <==
	2024/09/20 18:14:53 Ready to write response ...
	2024/09/20 18:14:55 Ready to marshal response ...
	2024/09/20 18:14:55 Ready to write response ...
	2024/09/20 18:14:55 Ready to marshal response ...
	2024/09/20 18:14:55 Ready to write response ...
	2024/09/20 18:22:59 Ready to marshal response ...
	2024/09/20 18:22:59 Ready to write response ...
	2024/09/20 18:22:59 Ready to marshal response ...
	2024/09/20 18:22:59 Ready to write response ...
	2024/09/20 18:22:59 Ready to marshal response ...
	2024/09/20 18:22:59 Ready to write response ...
	2024/09/20 18:23:05 Ready to marshal response ...
	2024/09/20 18:23:05 Ready to write response ...
	2024/09/20 18:23:05 Ready to marshal response ...
	2024/09/20 18:23:05 Ready to write response ...
	2024/09/20 18:23:10 Ready to marshal response ...
	2024/09/20 18:23:10 Ready to write response ...
	2024/09/20 18:23:15 Ready to marshal response ...
	2024/09/20 18:23:15 Ready to write response ...
	2024/09/20 18:23:18 Ready to marshal response ...
	2024/09/20 18:23:18 Ready to write response ...
	2024/09/20 18:23:29 Ready to marshal response ...
	2024/09/20 18:23:29 Ready to write response ...
	2024/09/20 18:23:37 Ready to marshal response ...
	2024/09/20 18:23:37 Ready to write response ...
	
	
	==> kernel <==
	 18:29:39 up 16 min,  0 users,  load average: 0.33, 0.25, 0.27
	Linux addons-446299 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [730952f4127d66b35d731eb28568293e71789263c71a1a0255283cb51922992c] <==
	 > logger="UnhandledError"
	W0920 18:15:27.823202       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:15:27.823313       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0920 18:15:27.823420       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:15:27.823588       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 18:15:27.824490       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 18:15:27.825326       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0920 18:15:31.828151       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.147.48:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.147.48:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	W0920 18:15:31.828390       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:15:31.828450       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 18:15:31.847786       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0920 18:15:31.853561       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0920 18:22:59.185908       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.29.221"}
	I0920 18:23:23.918494       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0920 18:23:25.009930       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0920 18:23:29.482103       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0920 18:23:29.675487       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.190.241"}
	I0920 18:23:30.728395       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0920 18:28:32.892900       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [a8af18aadd9a198bf616d46a7b451c4aa04e96f96e40f4b3bfe6f0ed2db6278e] <==
	W0920 18:23:34.852782       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:23:34.852837       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:23:45.509339       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:23:45.509390       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:24:03.134228       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:24:03.134359       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 18:24:11.155220       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="8.255µs"
	W0920 18:24:29.364098       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:24:29.364246       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:25:01.947190       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:25:01.947288       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:25:33.105344       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:25:33.105500       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:26:14.610422       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:26:14.610571       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:27:08.759968       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:27:08.760083       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:27:45.244240       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:27:45.244314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 18:28:08.201785       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="9.76µs"
	W0920 18:28:30.076776       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:28:30.076841       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 18:28:34.422130       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-446299"
	W0920 18:29:28.507680       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:29:28.507846       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [371fb9f89e965c1d1f23b67cb00baa69dc199d2d1a7cb0255780a4516c7256a6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:13:32.095684       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 18:13:32.111185       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.237"]
	E0920 18:13:32.111246       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:13:32.254832       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:13:32.254884       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:13:32.254908       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:13:32.262039       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:13:32.262450       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:13:32.262484       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:13:32.268397       1 config.go:199] "Starting service config controller"
	I0920 18:13:32.268443       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:13:32.268473       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:13:32.268477       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:13:32.268988       1 config.go:328] "Starting node config controller"
	I0920 18:13:32.268994       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:13:32.368877       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:13:32.368886       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:13:32.369073       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e9e7734f588477ea0c8338b75bff4c99d2033144998f9977041fbf99b5880072] <==
	W0920 18:13:22.809246       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 18:13:22.809282       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:22.809585       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 18:13:22.809621       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:22.813253       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 18:13:22.813298       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:22.813377       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 18:13:22.813413       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:22.813464       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 18:13:22.813478       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:22.815129       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 18:13:22.815174       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:23.637031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 18:13:23.637068       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:23.746262       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 18:13:23.746361       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:23.943434       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 18:13:23.943536       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:23.956043       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 18:13:23.956129       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:23.968884       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 18:13:23.969017       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:13:24.340405       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 18:13:24.340516       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0920 18:13:27.096843       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 18:28:55 addons-446299 kubelet[1199]: E0920 18:28:55.601609    1199 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856935601025791,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:28:55 addons-446299 kubelet[1199]: E0920 18:28:55.602001    1199 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856935601025791,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:28:56 addons-446299 kubelet[1199]: E0920 18:28:56.167493    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="785bf044-a4fc-4f3b-aa48-f0c32d84c0cb"
	Sep 20 18:29:01 addons-446299 kubelet[1199]: E0920 18:29:01.167396    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="e00699c2-7689-43aa-9a79-f6b8682fbe91"
	Sep 20 18:29:05 addons-446299 kubelet[1199]: E0920 18:29:05.604934    1199 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856945604330689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:29:05 addons-446299 kubelet[1199]: E0920 18:29:05.605295    1199 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856945604330689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:29:07 addons-446299 kubelet[1199]: E0920 18:29:07.169914    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="785bf044-a4fc-4f3b-aa48-f0c32d84c0cb"
	Sep 20 18:29:07 addons-446299 kubelet[1199]: E0920 18:29:07.170207    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod-restore" podUID="c0105316-5ff3-4ccd-8862-0a9a1965982f"
	Sep 20 18:29:13 addons-446299 kubelet[1199]: E0920 18:29:13.167470    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="e00699c2-7689-43aa-9a79-f6b8682fbe91"
	Sep 20 18:29:15 addons-446299 kubelet[1199]: E0920 18:29:15.608147    1199 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856955607778787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:29:15 addons-446299 kubelet[1199]: E0920 18:29:15.608436    1199 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856955607778787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:29:18 addons-446299 kubelet[1199]: E0920 18:29:18.166522    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="785bf044-a4fc-4f3b-aa48-f0c32d84c0cb"
	Sep 20 18:29:21 addons-446299 kubelet[1199]: E0920 18:29:21.166605    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod-restore" podUID="c0105316-5ff3-4ccd-8862-0a9a1965982f"
	Sep 20 18:29:25 addons-446299 kubelet[1199]: E0920 18:29:25.208452    1199 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 18:29:25 addons-446299 kubelet[1199]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 18:29:25 addons-446299 kubelet[1199]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 18:29:25 addons-446299 kubelet[1199]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 18:29:25 addons-446299 kubelet[1199]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 18:29:25 addons-446299 kubelet[1199]: E0920 18:29:25.611436    1199 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856965610946092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:29:25 addons-446299 kubelet[1199]: E0920 18:29:25.611468    1199 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856965610946092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:29:27 addons-446299 kubelet[1199]: E0920 18:29:27.165973    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="e00699c2-7689-43aa-9a79-f6b8682fbe91"
	Sep 20 18:29:31 addons-446299 kubelet[1199]: E0920 18:29:31.168125    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="785bf044-a4fc-4f3b-aa48-f0c32d84c0cb"
	Sep 20 18:29:32 addons-446299 kubelet[1199]: E0920 18:29:32.166305    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod-restore" podUID="c0105316-5ff3-4ccd-8862-0a9a1965982f"
	Sep 20 18:29:35 addons-446299 kubelet[1199]: E0920 18:29:35.614295    1199 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856975613943343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:29:35 addons-446299 kubelet[1199]: E0920 18:29:35.614572    1199 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856975613943343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [123e17c57dc2abd9c047233f8257257a3994d71637992344add53ad7199bd9f0] <==
	I0920 18:13:37.673799       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 18:13:37.889195       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 18:13:37.889268       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 18:13:37.991169       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 18:13:37.991374       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-446299_0cfdff58-c718-409b-bc42-bb5f67205de8!
	I0920 18:13:37.992328       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8e2a2b2a-26e5-43f5-ad91-442df4e21dfd", APIVersion:"v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-446299_0cfdff58-c718-409b-bc42-bb5f67205de8 became leader
	I0920 18:13:38.191750       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-446299_0cfdff58-c718-409b-bc42-bb5f67205de8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-446299 -n addons-446299
helpers_test.go:261: (dbg) Run:  kubectl --context addons-446299 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox nginx task-pv-pod-restore ingress-nginx-admission-create-sdwls ingress-nginx-admission-patch-2mwr8
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-446299 describe pod busybox nginx task-pv-pod-restore ingress-nginx-admission-create-sdwls ingress-nginx-admission-patch-2mwr8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-446299 describe pod busybox nginx task-pv-pod-restore ingress-nginx-admission-create-sdwls ingress-nginx-admission-patch-2mwr8: exit status 1 (86.742453ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-446299/192.168.39.237
	Start Time:       Fri, 20 Sep 2024 18:14:55 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s6l6f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-s6l6f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  14m                   default-scheduler  Successfully assigned default/busybox to addons-446299
	  Normal   Pulling    13m (x4 over 14m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     13m (x4 over 14m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     13m (x4 over 14m)     kubelet            Error: ErrImagePull
	  Warning  Failed     12m (x6 over 14m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m36s (x42 over 14m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-446299/192.168.39.237
	Start Time:       Fri, 20 Sep 2024 18:23:29 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8zg4g (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8zg4g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m11s                  default-scheduler  Successfully assigned default/nginx to addons-446299
	  Warning  Failed     5m39s                  kubelet            Failed to pull image "docker.io/nginx:alpine": copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m6s (x2 over 4m38s)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m16s (x4 over 6m10s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     91s (x4 over 5m39s)    kubelet            Error: ErrImagePull
	  Warning  Failed     91s                    kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    66s (x7 over 5m38s)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     66s (x7 over 5m38s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod-restore
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-446299/192.168.39.237
	Start Time:       Fri, 20 Sep 2024 18:23:37 +0000
	Labels:           app=task-pv-pod-restore
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zzgp9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc-restore
	    ReadOnly:   false
	  kube-api-access-zzgp9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m3s                 default-scheduler  Successfully assigned default/task-pv-pod-restore to addons-446299
	  Warning  Failed     3m37s                kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    105s (x4 over 6m2s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     61s (x3 over 5m8s)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     61s (x4 over 5m8s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    33s (x7 over 5m8s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     33s (x7 over 5m8s)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-sdwls" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2mwr8" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-446299 describe pod busybox nginx task-pv-pod-restore ingress-nginx-admission-create-sdwls ingress-nginx-admission-patch-2mwr8: exit status 1
--- FAIL: TestAddons/parallel/CSI (384.07s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (190.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [864d9275-483c-414e-841c-7b0f97612610] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.015864986s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-023857 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-023857 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-023857 get pvc myclaim -o=json
I0920 18:36:29.602341  748497 retry.go:31] will retry after 2.189147773s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:bf4d0cb7-4337-43e3-8357-543af67d4579 ResourceVersion:690 Generation:0 CreationTimestamp:2024-09-20 18:36:29 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0019a85d0 VolumeMode:0xc0019a85e0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-023857 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-023857 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2af898fd-7b04-41ed-8c9e-0651f29c22bc] Pending
helpers_test.go:344: "sp-pod" [2af898fd-7b04-41ed-8c9e-0651f29c22bc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-023857 -n functional-023857
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2024-09-20 18:39:32.235778284 +0000 UTC m=+1624.081422886
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-023857 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-023857 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-023857/192.168.39.93
Start Time:       Fri, 20 Sep 2024 18:36:31 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
IP:  10.244.0.10
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-59dqs (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-59dqs:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  3m                   default-scheduler  Successfully assigned default/sp-pod to functional-023857
Warning  Failed     2m29s                kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   BackOff    66s (x2 over 2m28s)  kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     66s (x2 over 2m28s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    55s (x3 over 3m)     kubelet            Pulling image "docker.io/nginx"
Warning  Failed     12s (x3 over 2m29s)  kubelet            Error: ErrImagePull
Warning  Failed     12s (x2 over 79s)    kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-023857 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-023857 logs sp-pod -n default: exit status 1 (71.711538ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-023857 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-023857 -n functional-023857
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-023857 logs -n 25: (1.547444389s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-023857 ssh stat                                               | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	|                | /mount-9p/created-by-pod                                                 |                   |         |         |                     |                     |
	| ssh            | functional-023857 ssh sudo                                               | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount          | -p functional-023857                                                     | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdspecific-port2890524020/001:/mount-9p |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1 --port 46464                                      |                   |         |         |                     |                     |
	| ssh            | functional-023857 ssh findmnt                                            | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC |                     |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh            | functional-023857 ssh findmnt                                            | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh            | functional-023857 ssh -- ls                                              | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	|                | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| ssh            | functional-023857 ssh sudo                                               | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC |                     |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount          | -p functional-023857                                                     | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2424216336/001:/mount1   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-023857                                                     | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2424216336/001:/mount2   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-023857                                                     | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2424216336/001:/mount3   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh            | functional-023857 ssh findmnt                                            | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC |                     |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| ssh            | functional-023857 ssh findmnt                                            | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| ssh            | functional-023857 ssh findmnt                                            | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	|                | -T /mount2                                                               |                   |         |         |                     |                     |
	| ssh            | functional-023857 ssh findmnt                                            | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	|                | -T /mount3                                                               |                   |         |         |                     |                     |
	| mount          | -p functional-023857                                                     | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC |                     |
	|                | --kill=true                                                              |                   |         |         |                     |                     |
	| image          | functional-023857                                                        | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	|                | image ls --format short                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-023857                                                        | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	|                | image ls --format yaml                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| ssh            | functional-023857 ssh pgrep                                              | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC |                     |
	|                | buildkitd                                                                |                   |         |         |                     |                     |
	| image          | functional-023857 image build -t                                         | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	|                | localhost/my-image:functional-023857                                     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                         |                   |         |         |                     |                     |
	| image          | functional-023857 image ls                                               | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	| image          | functional-023857                                                        | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	|                | image ls --format json                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-023857                                                        | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	|                | image ls --format table                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| update-context | functional-023857                                                        | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| update-context | functional-023857                                                        | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| update-context | functional-023857                                                        | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:36:37
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:36:37.808076  759778 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:36:37.808339  759778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:36:37.808349  759778 out.go:358] Setting ErrFile to fd 2...
	I0920 18:36:37.808354  759778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:36:37.808664  759778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
	I0920 18:36:37.809227  759778 out.go:352] Setting JSON to false
	I0920 18:36:37.810275  759778 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8348,"bootTime":1726849050,"procs":250,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:36:37.810374  759778 start.go:139] virtualization: kvm guest
	I0920 18:36:37.812436  759778 out.go:177] * [functional-023857] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:36:37.813834  759778 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 18:36:37.813841  759778 notify.go:220] Checking for updates...
	I0920 18:36:37.816359  759778 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:36:37.817759  759778 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:36:37.819129  759778 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:36:37.820349  759778 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:36:37.821722  759778 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:36:37.823321  759778 config.go:182] Loaded profile config "functional-023857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:36:37.823743  759778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:36:37.823814  759778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:36:37.839986  759778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39211
	I0920 18:36:37.840459  759778 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:36:37.841068  759778 main.go:141] libmachine: Using API Version  1
	I0920 18:36:37.841109  759778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:36:37.841456  759778 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:36:37.841679  759778 main.go:141] libmachine: (functional-023857) Calling .DriverName
	I0920 18:36:37.841937  759778 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:36:37.842235  759778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:36:37.842275  759778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:36:37.857372  759778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38539
	I0920 18:36:37.857848  759778 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:36:37.858408  759778 main.go:141] libmachine: Using API Version  1
	I0920 18:36:37.858449  759778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:36:37.858816  759778 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:36:37.859004  759778 main.go:141] libmachine: (functional-023857) Calling .DriverName
	I0920 18:36:37.890577  759778 out.go:177] * Using the kvm2 driver based on the existing profile
	I0920 18:36:37.891878  759778 start.go:297] selected driver: kvm2
	I0920 18:36:37.891907  759778 start.go:901] validating driver "kvm2" against &{Name:functional-023857 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-023857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.93 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:36:37.892031  759778 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:36:37.894176  759778 out.go:201] 
	W0920 18:36:37.895327  759778 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB
	I0920 18:36:37.896536  759778 out.go:201] 
	
	
	==> CRI-O <==
	Sep 20 18:39:33 functional-023857 crio[4715]: time="2024-09-20 18:39:33.070398002Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857573070370687,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232799,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=03953ba2-533e-4d5d-bf44-178a2dd1f702 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:39:33 functional-023857 crio[4715]: time="2024-09-20 18:39:33.071330150Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=954146af-d33f-4638-a315-a0b9164ac47e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:39:33 functional-023857 crio[4715]: time="2024-09-20 18:39:33.071546165Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=954146af-d33f-4638-a315-a0b9164ac47e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:39:33 functional-023857 crio[4715]: time="2024-09-20 18:39:33.072134946Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98422586941ed5d5be11194f2972f46131b0c8df14c0c534d99ff741cda814ea,PodSandboxId:6086c556deeae280b86a459185f22aa485cfd26959f3adaa1703268446f8680f,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1726857462581828344,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-xfwwx,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7932711d-4564-4f20-8d18-fe63fd4af7a0,},Annotations:map[string]string{io.kubernetes.container.has
h: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6cc46c85979e72ea8eac5af4dfce9b3e03cfc30ee8fe3df005df4eeb431ec96,PodSandboxId:d5820c867eee6bfa759eeda9938e2365b3e482575a22c94df71eb6c3d7d04daa,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1726857457302862821,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c5db448b4-xllhd,io.kubernetes
.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 930cbd4b-8c12-4d96-aa75-d9df7152e4a4,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1603e7fc015d5c007156a7ce65b313c1ca3bade746930b1bcef783bce75b237,PodSandboxId:c4544c07b93ad383d5122b487fdf5d81875899dcd790600e02cc1a5f2773d818,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1726857455176584446,Labels:map[string]string{io.k
ubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f272a016-0500-4c42-a245-a79b4aa77359,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7d3f0d5a255544a69ade0d2d4cf74add4317166d9ea7f0331853d61caf151e,PodSandboxId:52adb96704963fcdc89396464d5ecffdb029c4b5a67cf577a46b6352e7ab947b,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1726857388208814640,Labels:map[string]string{io.kuber
netes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-hj76w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 358194f5-7d44-4bf1-9c90-a57f0079a0a3,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e05ec8ac967682862ac9b95c2d52fa754973ce6155a390d10efa87a475b4ec4,PodSandboxId:db5013d2a53b9a757db634861a4f6f5521d78976660ba9a62b685af9a03a24b6,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1726857388128125182,Labels:map[string
]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6b9f76b5c7-7rbf2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d7c14f12-78fa-492c-a893-12ea14bbaa08,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073528f9db5cd85f0481e1394beff20af69853f1cfa180605dbd126f7498eb80,PodSandboxId:8089343c80145c45214f6649c9382dfa169a417612275f62079ef83750ee0b74,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726857360230513309,Labels:map[string]string{io.kubernetes.contain
er.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2dmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66614af-8858-4f76-8b94-2580ac2fc019,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3142ee0cbf3160e4c60db766d0945a4b551e4fdc0e40cb15d0a6f0365a6272b,PodSandboxId:1d3b6dc1d00387ed949d376477e6f4a581da257175320ec78c616b64b66bc95a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726857356286081419,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292decab062e09b859f1ae460211fd66,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de9f0252f7f81f538befe30b3e4415d2a20d2fe40b91c97042907719dae6b9bf,PodSandboxId:02cec729ef1233f6ac1c29d605b339f4c31e41574034614489e745517f8cd0cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726857356110599367,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc86c4f5a4863fed0e816b217a7bd063,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aefe896edb12efcaef0b550d11d4d87828e02076446a338c5c0a89c582fefd9,PodSandboxId:a7148b6c1de9c58a6379d7d3b0adac8e64f9af61e28179cba2a697fdb395f013,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726857353907318977,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab59d3304493fda85f8b2ed4f1a23949,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a895e49e3fae6b133f7d6971a77f242750e509c1acf7736347302588367441d4,PodSandboxId:b3fdb8eae03bfd3e99258f6c361e224ac86f6b43534c4bf5a473c3dae5e3d426,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImag
e:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726857353973066465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864d9275-483c-414e-841c-7b0f97612610,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:830a9e361bd83ba6fb475f6c0ea298beea967a4a82d79c6789d50383ddee292c,PodSandboxId:07999a46ed097bcef6278d13465951b8e7555fc0e5a6b40b2269155d1c301284,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726857353959384074,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6hl6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 440a2827-7194-4721-b1e0-c356fc6be3af,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14fc73baf20f3be1d1149f23bf9ae1ab71652015c84212a13727924f856adc73,PodSandboxId:50100e03d3e9596d30816dd08ca880de56ab43f4d843f2af5803f5d671537952,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca
5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726857353854905932,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 109be7ba378e41e3d3f543e5ce2b30a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4e5a788dc92fc0c871582d643d9a32e3397862a66072436d9e059c810cc4a9,PodSandboxId:2f9a5fddff00d0fbf043e38455556fb698c2754aff84a3ae392dd87f4d5cb80a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State
:CONTAINER_EXITED,CreatedAt:1726857316346831089,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2dmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66614af-8858-4f76-8b94-2580ac2fc019,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de751dc72438df7c2e75ead88ead0ddcf00fa24dece094bfb9b3663fd5324641,PodSandboxId:b9398e2cc0bd5dfd5082c0ba17301e83dc83138e67e28d2323f9772495d423cf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d
131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726857316321593818,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6hl6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 440a2827-7194-4721-b1e0-c356fc6be3af,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4886699ad63bba9d38ef13eb8597a566114a9ed695dd0a63c2b15c14676c6ba9,PodSandboxId:1a186182398e96144d56e47f5266152b92901106c457ca6f9928c860e995a8fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d86
7d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726857310383548384,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864d9275-483c-414e-841c-7b0f97612610,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29bad7e051fededdb23e1af642018dcf912edffbdbd38499fe84826cf6dae7d9,PodSandboxId:630c1a13b7bbb3c357c023af1c958f9094731aa73e9b3eac1398766107c2bc36,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726857310291259120,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 109be7ba378e41e3d3f543e5ce2b30a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d92ff3168d16da22d6afd2037de4213d705f7e7f3e8b757785a2789640e33b5,PodSandboxId:fc953c229a27103bc550e0e93e63df725cc792bcba7c5af415d1c5c55cb75016,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726857310232778586,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab59d3304493fda85f8b2ed4f1a23949,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9dc2db11ced2fc4130cfa386ec557a5e2809bc92c435a2ab2ca9ecdcf610fd2,PodSandboxId:bd5db08ef37cff584d43c66d5a73c8496b41a19cd04eab773eed6912311c843b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726857310261104883,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc86c4f5a4863fed0e816b217a7bd063,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=954146af-d33f-4638-a315-a0b9164ac47e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:39:33 functional-023857 crio[4715]: time="2024-09-20 18:39:33.115509281Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f41c25c5-aa52-4896-8878-4625a1c958a7 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:39:33 functional-023857 crio[4715]: time="2024-09-20 18:39:33.115579859Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f41c25c5-aa52-4896-8878-4625a1c958a7 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:39:33 functional-023857 crio[4715]: time="2024-09-20 18:39:33.117394844Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3dbb6930-247f-4dd5-a1c0-4bf8240d399f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:39:33 functional-023857 crio[4715]: time="2024-09-20 18:39:33.118164276Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857573118138639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232799,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3dbb6930-247f-4dd5-a1c0-4bf8240d399f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:39:33 functional-023857 crio[4715]: time="2024-09-20 18:39:33.119144375Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be149c2c-9ef1-4885-a848-63ee57bb79d3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:39:33 functional-023857 crio[4715]: time="2024-09-20 18:39:33.119203448Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be149c2c-9ef1-4885-a848-63ee57bb79d3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:39:33 functional-023857 crio[4715]: time="2024-09-20 18:39:33.119563121Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98422586941ed5d5be11194f2972f46131b0c8df14c0c534d99ff741cda814ea,PodSandboxId:6086c556deeae280b86a459185f22aa485cfd26959f3adaa1703268446f8680f,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1726857462581828344,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-xfwwx,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7932711d-4564-4f20-8d18-fe63fd4af7a0,},Annotations:map[string]string{io.kubernetes.container.has
h: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6cc46c85979e72ea8eac5af4dfce9b3e03cfc30ee8fe3df005df4eeb431ec96,PodSandboxId:d5820c867eee6bfa759eeda9938e2365b3e482575a22c94df71eb6c3d7d04daa,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1726857457302862821,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c5db448b4-xllhd,io.kubernetes
.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 930cbd4b-8c12-4d96-aa75-d9df7152e4a4,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1603e7fc015d5c007156a7ce65b313c1ca3bade746930b1bcef783bce75b237,PodSandboxId:c4544c07b93ad383d5122b487fdf5d81875899dcd790600e02cc1a5f2773d818,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1726857455176584446,Labels:map[string]string{io.k
ubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f272a016-0500-4c42-a245-a79b4aa77359,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7d3f0d5a255544a69ade0d2d4cf74add4317166d9ea7f0331853d61caf151e,PodSandboxId:52adb96704963fcdc89396464d5ecffdb029c4b5a67cf577a46b6352e7ab947b,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1726857388208814640,Labels:map[string]string{io.kuber
netes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-hj76w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 358194f5-7d44-4bf1-9c90-a57f0079a0a3,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e05ec8ac967682862ac9b95c2d52fa754973ce6155a390d10efa87a475b4ec4,PodSandboxId:db5013d2a53b9a757db634861a4f6f5521d78976660ba9a62b685af9a03a24b6,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1726857388128125182,Labels:map[string
]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6b9f76b5c7-7rbf2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d7c14f12-78fa-492c-a893-12ea14bbaa08,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073528f9db5cd85f0481e1394beff20af69853f1cfa180605dbd126f7498eb80,PodSandboxId:8089343c80145c45214f6649c9382dfa169a417612275f62079ef83750ee0b74,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726857360230513309,Labels:map[string]string{io.kubernetes.contain
er.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2dmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66614af-8858-4f76-8b94-2580ac2fc019,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3142ee0cbf3160e4c60db766d0945a4b551e4fdc0e40cb15d0a6f0365a6272b,PodSandboxId:1d3b6dc1d00387ed949d376477e6f4a581da257175320ec78c616b64b66bc95a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726857356286081419,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292decab062e09b859f1ae460211fd66,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de9f0252f7f81f538befe30b3e4415d2a20d2fe40b91c97042907719dae6b9bf,PodSandboxId:02cec729ef1233f6ac1c29d605b339f4c31e41574034614489e745517f8cd0cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726857356110599367,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc86c4f5a4863fed0e816b217a7bd063,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aefe896edb12efcaef0b550d11d4d87828e02076446a338c5c0a89c582fefd9,PodSandboxId:a7148b6c1de9c58a6379d7d3b0adac8e64f9af61e28179cba2a697fdb395f013,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726857353907318977,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab59d3304493fda85f8b2ed4f1a23949,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a895e49e3fae6b133f7d6971a77f242750e509c1acf7736347302588367441d4,PodSandboxId:b3fdb8eae03bfd3e99258f6c361e224ac86f6b43534c4bf5a473c3dae5e3d426,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImag
e:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726857353973066465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864d9275-483c-414e-841c-7b0f97612610,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:830a9e361bd83ba6fb475f6c0ea298beea967a4a82d79c6789d50383ddee292c,PodSandboxId:07999a46ed097bcef6278d13465951b8e7555fc0e5a6b40b2269155d1c301284,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726857353959384074,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6hl6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 440a2827-7194-4721-b1e0-c356fc6be3af,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14fc73baf20f3be1d1149f23bf9ae1ab71652015c84212a13727924f856adc73,PodSandboxId:50100e03d3e9596d30816dd08ca880de56ab43f4d843f2af5803f5d671537952,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca
5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726857353854905932,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 109be7ba378e41e3d3f543e5ce2b30a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4e5a788dc92fc0c871582d643d9a32e3397862a66072436d9e059c810cc4a9,PodSandboxId:2f9a5fddff00d0fbf043e38455556fb698c2754aff84a3ae392dd87f4d5cb80a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State
:CONTAINER_EXITED,CreatedAt:1726857316346831089,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2dmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66614af-8858-4f76-8b94-2580ac2fc019,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de751dc72438df7c2e75ead88ead0ddcf00fa24dece094bfb9b3663fd5324641,PodSandboxId:b9398e2cc0bd5dfd5082c0ba17301e83dc83138e67e28d2323f9772495d423cf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d
131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726857316321593818,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6hl6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 440a2827-7194-4721-b1e0-c356fc6be3af,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4886699ad63bba9d38ef13eb8597a566114a9ed695dd0a63c2b15c14676c6ba9,PodSandboxId:1a186182398e96144d56e47f5266152b92901106c457ca6f9928c860e995a8fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d86
7d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726857310383548384,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864d9275-483c-414e-841c-7b0f97612610,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29bad7e051fededdb23e1af642018dcf912edffbdbd38499fe84826cf6dae7d9,PodSandboxId:630c1a13b7bbb3c357c023af1c958f9094731aa73e9b3eac1398766107c2bc36,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726857310291259120,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 109be7ba378e41e3d3f543e5ce2b30a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d92ff3168d16da22d6afd2037de4213d705f7e7f3e8b757785a2789640e33b5,PodSandboxId:fc953c229a27103bc550e0e93e63df725cc792bcba7c5af415d1c5c55cb75016,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726857310232778586,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab59d3304493fda85f8b2ed4f1a23949,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9dc2db11ced2fc4130cfa386ec557a5e2809bc92c435a2ab2ca9ecdcf610fd2,PodSandboxId:bd5db08ef37cff584d43c66d5a73c8496b41a19cd04eab773eed6912311c843b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726857310261104883,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc86c4f5a4863fed0e816b217a7bd063,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be149c2c-9ef1-4885-a848-63ee57bb79d3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:39:33 functional-023857 crio[4715]: time="2024-09-20 18:39:33.161410987Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=882f0320-166d-4c16-a983-5b33769e7d1a name=/runtime.v1.RuntimeService/Version
	Sep 20 18:39:33 functional-023857 crio[4715]: time="2024-09-20 18:39:33.161994042Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=882f0320-166d-4c16-a983-5b33769e7d1a name=/runtime.v1.RuntimeService/Version
	Sep 20 18:39:33 functional-023857 crio[4715]: time="2024-09-20 18:39:33.164227994Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ad8faa72-1901-4188-883a-9d60ef04b802 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:39:33 functional-023857 crio[4715]: time="2024-09-20 18:39:33.165280425Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857573165248599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232799,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ad8faa72-1901-4188-883a-9d60ef04b802 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:39:33 functional-023857 crio[4715]: time="2024-09-20 18:39:33.166215822Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e19808e7-4bf0-407c-9a0e-78657c359b57 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:39:33 functional-023857 crio[4715]: time="2024-09-20 18:39:33.166268643Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e19808e7-4bf0-407c-9a0e-78657c359b57 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:39:33 functional-023857 crio[4715]: time="2024-09-20 18:39:33.166643299Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98422586941ed5d5be11194f2972f46131b0c8df14c0c534d99ff741cda814ea,PodSandboxId:6086c556deeae280b86a459185f22aa485cfd26959f3adaa1703268446f8680f,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1726857462581828344,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-xfwwx,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7932711d-4564-4f20-8d18-fe63fd4af7a0,},Annotations:map[string]string{io.kubernetes.container.has
h: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6cc46c85979e72ea8eac5af4dfce9b3e03cfc30ee8fe3df005df4eeb431ec96,PodSandboxId:d5820c867eee6bfa759eeda9938e2365b3e482575a22c94df71eb6c3d7d04daa,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1726857457302862821,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c5db448b4-xllhd,io.kubernetes
.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 930cbd4b-8c12-4d96-aa75-d9df7152e4a4,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1603e7fc015d5c007156a7ce65b313c1ca3bade746930b1bcef783bce75b237,PodSandboxId:c4544c07b93ad383d5122b487fdf5d81875899dcd790600e02cc1a5f2773d818,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1726857455176584446,Labels:map[string]string{io.k
ubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f272a016-0500-4c42-a245-a79b4aa77359,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7d3f0d5a255544a69ade0d2d4cf74add4317166d9ea7f0331853d61caf151e,PodSandboxId:52adb96704963fcdc89396464d5ecffdb029c4b5a67cf577a46b6352e7ab947b,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1726857388208814640,Labels:map[string]string{io.kuber
netes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-hj76w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 358194f5-7d44-4bf1-9c90-a57f0079a0a3,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e05ec8ac967682862ac9b95c2d52fa754973ce6155a390d10efa87a475b4ec4,PodSandboxId:db5013d2a53b9a757db634861a4f6f5521d78976660ba9a62b685af9a03a24b6,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1726857388128125182,Labels:map[string
]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6b9f76b5c7-7rbf2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d7c14f12-78fa-492c-a893-12ea14bbaa08,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073528f9db5cd85f0481e1394beff20af69853f1cfa180605dbd126f7498eb80,PodSandboxId:8089343c80145c45214f6649c9382dfa169a417612275f62079ef83750ee0b74,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726857360230513309,Labels:map[string]string{io.kubernetes.contain
er.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2dmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66614af-8858-4f76-8b94-2580ac2fc019,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3142ee0cbf3160e4c60db766d0945a4b551e4fdc0e40cb15d0a6f0365a6272b,PodSandboxId:1d3b6dc1d00387ed949d376477e6f4a581da257175320ec78c616b64b66bc95a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726857356286081419,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292decab062e09b859f1ae460211fd66,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de9f0252f7f81f538befe30b3e4415d2a20d2fe40b91c97042907719dae6b9bf,PodSandboxId:02cec729ef1233f6ac1c29d605b339f4c31e41574034614489e745517f8cd0cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726857356110599367,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc86c4f5a4863fed0e816b217a7bd063,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aefe896edb12efcaef0b550d11d4d87828e02076446a338c5c0a89c582fefd9,PodSandboxId:a7148b6c1de9c58a6379d7d3b0adac8e64f9af61e28179cba2a697fdb395f013,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726857353907318977,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab59d3304493fda85f8b2ed4f1a23949,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a895e49e3fae6b133f7d6971a77f242750e509c1acf7736347302588367441d4,PodSandboxId:b3fdb8eae03bfd3e99258f6c361e224ac86f6b43534c4bf5a473c3dae5e3d426,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImag
e:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726857353973066465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864d9275-483c-414e-841c-7b0f97612610,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:830a9e361bd83ba6fb475f6c0ea298beea967a4a82d79c6789d50383ddee292c,PodSandboxId:07999a46ed097bcef6278d13465951b8e7555fc0e5a6b40b2269155d1c301284,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726857353959384074,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6hl6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 440a2827-7194-4721-b1e0-c356fc6be3af,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14fc73baf20f3be1d1149f23bf9ae1ab71652015c84212a13727924f856adc73,PodSandboxId:50100e03d3e9596d30816dd08ca880de56ab43f4d843f2af5803f5d671537952,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca
5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726857353854905932,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 109be7ba378e41e3d3f543e5ce2b30a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4e5a788dc92fc0c871582d643d9a32e3397862a66072436d9e059c810cc4a9,PodSandboxId:2f9a5fddff00d0fbf043e38455556fb698c2754aff84a3ae392dd87f4d5cb80a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State
:CONTAINER_EXITED,CreatedAt:1726857316346831089,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2dmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66614af-8858-4f76-8b94-2580ac2fc019,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de751dc72438df7c2e75ead88ead0ddcf00fa24dece094bfb9b3663fd5324641,PodSandboxId:b9398e2cc0bd5dfd5082c0ba17301e83dc83138e67e28d2323f9772495d423cf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d
131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726857316321593818,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6hl6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 440a2827-7194-4721-b1e0-c356fc6be3af,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4886699ad63bba9d38ef13eb8597a566114a9ed695dd0a63c2b15c14676c6ba9,PodSandboxId:1a186182398e96144d56e47f5266152b92901106c457ca6f9928c860e995a8fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d86
7d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726857310383548384,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864d9275-483c-414e-841c-7b0f97612610,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29bad7e051fededdb23e1af642018dcf912edffbdbd38499fe84826cf6dae7d9,PodSandboxId:630c1a13b7bbb3c357c023af1c958f9094731aa73e9b3eac1398766107c2bc36,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726857310291259120,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 109be7ba378e41e3d3f543e5ce2b30a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d92ff3168d16da22d6afd2037de4213d705f7e7f3e8b757785a2789640e33b5,PodSandboxId:fc953c229a27103bc550e0e93e63df725cc792bcba7c5af415d1c5c55cb75016,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726857310232778586,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab59d3304493fda85f8b2ed4f1a23949,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9dc2db11ced2fc4130cfa386ec557a5e2809bc92c435a2ab2ca9ecdcf610fd2,PodSandboxId:bd5db08ef37cff584d43c66d5a73c8496b41a19cd04eab773eed6912311c843b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726857310261104883,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc86c4f5a4863fed0e816b217a7bd063,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e19808e7-4bf0-407c-9a0e-78657c359b57 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:39:33 functional-023857 crio[4715]: time="2024-09-20 18:39:33.235620268Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=917859f1-6a51-45a5-ba28-e3015683b2a0 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:39:33 functional-023857 crio[4715]: time="2024-09-20 18:39:33.235782647Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=917859f1-6a51-45a5-ba28-e3015683b2a0 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:39:33 functional-023857 crio[4715]: time="2024-09-20 18:39:33.239720404Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f7a7682c-8977-4581-86a2-408437707b35 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:39:33 functional-023857 crio[4715]: time="2024-09-20 18:39:33.240736366Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857573240698468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232799,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f7a7682c-8977-4581-86a2-408437707b35 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:39:33 functional-023857 crio[4715]: time="2024-09-20 18:39:33.241484190Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a945afc5-32fb-4678-b6e8-f64b38b23f35 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:39:33 functional-023857 crio[4715]: time="2024-09-20 18:39:33.241541836Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a945afc5-32fb-4678-b6e8-f64b38b23f35 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:39:33 functional-023857 crio[4715]: time="2024-09-20 18:39:33.242052328Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98422586941ed5d5be11194f2972f46131b0c8df14c0c534d99ff741cda814ea,PodSandboxId:6086c556deeae280b86a459185f22aa485cfd26959f3adaa1703268446f8680f,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1726857462581828344,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-xfwwx,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7932711d-4564-4f20-8d18-fe63fd4af7a0,},Annotations:map[string]string{io.kubernetes.container.has
h: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6cc46c85979e72ea8eac5af4dfce9b3e03cfc30ee8fe3df005df4eeb431ec96,PodSandboxId:d5820c867eee6bfa759eeda9938e2365b3e482575a22c94df71eb6c3d7d04daa,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1726857457302862821,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c5db448b4-xllhd,io.kubernetes
.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 930cbd4b-8c12-4d96-aa75-d9df7152e4a4,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1603e7fc015d5c007156a7ce65b313c1ca3bade746930b1bcef783bce75b237,PodSandboxId:c4544c07b93ad383d5122b487fdf5d81875899dcd790600e02cc1a5f2773d818,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1726857455176584446,Labels:map[string]string{io.k
ubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f272a016-0500-4c42-a245-a79b4aa77359,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7d3f0d5a255544a69ade0d2d4cf74add4317166d9ea7f0331853d61caf151e,PodSandboxId:52adb96704963fcdc89396464d5ecffdb029c4b5a67cf577a46b6352e7ab947b,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1726857388208814640,Labels:map[string]string{io.kuber
netes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-hj76w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 358194f5-7d44-4bf1-9c90-a57f0079a0a3,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e05ec8ac967682862ac9b95c2d52fa754973ce6155a390d10efa87a475b4ec4,PodSandboxId:db5013d2a53b9a757db634861a4f6f5521d78976660ba9a62b685af9a03a24b6,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1726857388128125182,Labels:map[string
]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6b9f76b5c7-7rbf2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d7c14f12-78fa-492c-a893-12ea14bbaa08,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073528f9db5cd85f0481e1394beff20af69853f1cfa180605dbd126f7498eb80,PodSandboxId:8089343c80145c45214f6649c9382dfa169a417612275f62079ef83750ee0b74,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726857360230513309,Labels:map[string]string{io.kubernetes.contain
er.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2dmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66614af-8858-4f76-8b94-2580ac2fc019,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3142ee0cbf3160e4c60db766d0945a4b551e4fdc0e40cb15d0a6f0365a6272b,PodSandboxId:1d3b6dc1d00387ed949d376477e6f4a581da257175320ec78c616b64b66bc95a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726857356286081419,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292decab062e09b859f1ae460211fd66,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de9f0252f7f81f538befe30b3e4415d2a20d2fe40b91c97042907719dae6b9bf,PodSandboxId:02cec729ef1233f6ac1c29d605b339f4c31e41574034614489e745517f8cd0cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726857356110599367,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc86c4f5a4863fed0e816b217a7bd063,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aefe896edb12efcaef0b550d11d4d87828e02076446a338c5c0a89c582fefd9,PodSandboxId:a7148b6c1de9c58a6379d7d3b0adac8e64f9af61e28179cba2a697fdb395f013,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726857353907318977,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab59d3304493fda85f8b2ed4f1a23949,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a895e49e3fae6b133f7d6971a77f242750e509c1acf7736347302588367441d4,PodSandboxId:b3fdb8eae03bfd3e99258f6c361e224ac86f6b43534c4bf5a473c3dae5e3d426,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImag
e:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726857353973066465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864d9275-483c-414e-841c-7b0f97612610,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:830a9e361bd83ba6fb475f6c0ea298beea967a4a82d79c6789d50383ddee292c,PodSandboxId:07999a46ed097bcef6278d13465951b8e7555fc0e5a6b40b2269155d1c301284,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726857353959384074,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6hl6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 440a2827-7194-4721-b1e0-c356fc6be3af,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14fc73baf20f3be1d1149f23bf9ae1ab71652015c84212a13727924f856adc73,PodSandboxId:50100e03d3e9596d30816dd08ca880de56ab43f4d843f2af5803f5d671537952,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca
5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726857353854905932,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 109be7ba378e41e3d3f543e5ce2b30a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4e5a788dc92fc0c871582d643d9a32e3397862a66072436d9e059c810cc4a9,PodSandboxId:2f9a5fddff00d0fbf043e38455556fb698c2754aff84a3ae392dd87f4d5cb80a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State
:CONTAINER_EXITED,CreatedAt:1726857316346831089,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2dmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66614af-8858-4f76-8b94-2580ac2fc019,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de751dc72438df7c2e75ead88ead0ddcf00fa24dece094bfb9b3663fd5324641,PodSandboxId:b9398e2cc0bd5dfd5082c0ba17301e83dc83138e67e28d2323f9772495d423cf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d
131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726857316321593818,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6hl6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 440a2827-7194-4721-b1e0-c356fc6be3af,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4886699ad63bba9d38ef13eb8597a566114a9ed695dd0a63c2b15c14676c6ba9,PodSandboxId:1a186182398e96144d56e47f5266152b92901106c457ca6f9928c860e995a8fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d86
7d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726857310383548384,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864d9275-483c-414e-841c-7b0f97612610,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29bad7e051fededdb23e1af642018dcf912edffbdbd38499fe84826cf6dae7d9,PodSandboxId:630c1a13b7bbb3c357c023af1c958f9094731aa73e9b3eac1398766107c2bc36,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726857310291259120,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 109be7ba378e41e3d3f543e5ce2b30a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d92ff3168d16da22d6afd2037de4213d705f7e7f3e8b757785a2789640e33b5,PodSandboxId:fc953c229a27103bc550e0e93e63df725cc792bcba7c5af415d1c5c55cb75016,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726857310232778586,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab59d3304493fda85f8b2ed4f1a23949,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9dc2db11ced2fc4130cfa386ec557a5e2809bc92c435a2ab2ca9ecdcf610fd2,PodSandboxId:bd5db08ef37cff584d43c66d5a73c8496b41a19cd04eab773eed6912311c843b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726857310261104883,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc86c4f5a4863fed0e816b217a7bd063,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a945afc5-32fb-4678-b6e8-f64b38b23f35 name=/runtime.v1.RuntimeService/ListContainers
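The ListContainersRequest/ListContainersResponse pairs above are ordinary CRI calls hitting CRI-O's socket (most likely the kubelet's periodic container listing plus the log collection itself). Purely as an illustrative sketch of that same call, and not part of the test harness: a minimal Go client using the k8s.io/cri-api v1 bindings, assuming the default CRI-O socket path unix:///var/run/crio/crio.sock (which matches the node's cri-socket annotation further down).

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the CRI-O socket (path is an assumption for this sketch).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter corresponds to the "No filters were applied,
		// returning full container list" debug line in the log above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %-28s attempt=%d state=%s\n",
				c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}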
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	98422586941ed       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         About a minute ago   Running             kubernetes-dashboard        0                   6086c556deeae       kubernetes-dashboard-695b96c756-xfwwx
	b6cc46c85979e       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   About a minute ago   Running             dashboard-metrics-scraper   0                   d5820c867eee6       dashboard-metrics-scraper-c5db448b4-xllhd
	c1603e7fc015d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              About a minute ago   Exited              mount-munger                0                   c4544c07b93ad       busybox-mount
	2b7d3f0d5a255       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               3 minutes ago        Running             echoserver                  0                   52adb96704963       hello-node-connect-67bdd5bbb4-hj76w
	3e05ec8ac9676       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               3 minutes ago        Running             echoserver                  0                   db5013d2a53b9       hello-node-6b9f76b5c7-7rbf2
	073528f9db5cd       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 3 minutes ago        Running             coredns                     2                   8089343c80145       coredns-7c65d6cfc9-v2dmd
	c3142ee0cbf31       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                 3 minutes ago        Running             kube-apiserver              0                   1d3b6dc1d0038       kube-apiserver-functional-023857
	de9f0252f7f81       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 3 minutes ago        Running             kube-controller-manager     2                   02cec729ef123       kube-controller-manager-functional-023857
	a895e49e3fae6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 3 minutes ago        Running             storage-provisioner         2                   b3fdb8eae03bf       storage-provisioner
	830a9e361bd83       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 3 minutes ago        Running             kube-proxy                  2                   07999a46ed097       kube-proxy-k6hl6
	3aefe896edb12       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 3 minutes ago        Running             kube-scheduler              2                   a7148b6c1de9c       kube-scheduler-functional-023857
	14fc73baf20f3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 3 minutes ago        Running             etcd                        2                   50100e03d3e95       etcd-functional-023857
	0c4e5a788dc92       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 4 minutes ago        Exited              coredns                     1                   2f9a5fddff00d       coredns-7c65d6cfc9-v2dmd
	de751dc72438d       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 4 minutes ago        Exited              kube-proxy                  1                   b9398e2cc0bd5       kube-proxy-k6hl6
	4886699ad63bb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 4 minutes ago        Exited              storage-provisioner         1                   1a186182398e9       storage-provisioner
	29bad7e051fed       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 4 minutes ago        Exited              etcd                        1                   630c1a13b7bbb       etcd-functional-023857
	e9dc2db11ced2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 4 minutes ago        Exited              kube-controller-manager     1                   bd5db08ef37cf       kube-controller-manager-functional-023857
	8d92ff3168d16       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 4 minutes ago        Exited              kube-scheduler              1                   fc953c229a271       kube-scheduler-functional-023857
	
	
	==> coredns [073528f9db5cd85f0481e1394beff20af69853f1cfa180605dbd126f7498eb80] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:47151 - 32373 "HINFO IN 7303210622316958186.61452939891773212. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.029454658s
	
	
	==> coredns [0c4e5a788dc92fc0c871582d643d9a32e3397862a66072436d9e059c810cc4a9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:53602 - 44332 "HINFO IN 1581926696694530428.1537307737197261760. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033211115s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-023857
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-023857
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=functional-023857
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_34_39_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:34:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-023857
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:39:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:38:01 +0000   Fri, 20 Sep 2024 18:34:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:38:01 +0000   Fri, 20 Sep 2024 18:34:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:38:01 +0000   Fri, 20 Sep 2024 18:34:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:38:01 +0000   Fri, 20 Sep 2024 18:34:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.93
	  Hostname:    functional-023857
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 17227d3d682e46e29ecf58fda0a4b07a
	  System UUID:                17227d3d-682e-46e2-9ecf-58fda0a4b07a
	  Boot ID:                    a2770fe1-7030-4dea-b57f-b423181af6b5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6b9f76b5c7-7rbf2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  default                     hello-node-connect-67bdd5bbb4-hj76w          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     mysql-6cdb49bbb-p6v98                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    2m58s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-7c65d6cfc9-v2dmd                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m49s
	  kube-system                 etcd-functional-023857                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m55s
	  kube-system                 kube-apiserver-functional-023857             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 kube-controller-manager-functional-023857    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-proxy-k6hl6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-scheduler-functional-023857             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-xllhd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-xfwwx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m48s                  kube-proxy       
	  Normal  Starting                 3m33s                  kube-proxy       
	  Normal  Starting                 4m16s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m55s                  kubelet          Node functional-023857 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m55s                  kubelet          Node functional-023857 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m55s                  kubelet          Node functional-023857 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m55s                  kubelet          Starting kubelet.
	  Normal  NodeReady                4m54s                  kubelet          Node functional-023857 status is now: NodeReady
	  Normal  RegisteredNode           4m51s                  node-controller  Node functional-023857 event: Registered Node functional-023857 in Controller
	  Normal  Starting                 4m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    4m21s (x8 over 4m21s)  kubelet          Node functional-023857 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  4m21s (x8 over 4m21s)  kubelet          Node functional-023857 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     4m21s (x7 over 4m21s)  kubelet          Node functional-023857 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m14s                  node-controller  Node functional-023857 event: Registered Node functional-023857 in Controller
	  Normal  Starting                 3m38s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m38s (x8 over 3m38s)  kubelet          Node functional-023857 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m38s (x8 over 3m38s)  kubelet          Node functional-023857 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m38s (x7 over 3m38s)  kubelet          Node functional-023857 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m31s                  node-controller  Node functional-023857 event: Registered Node functional-023857 in Controller
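The percentages in the "Allocated resources" table above are simply the summed requests/limits divided by the node's Allocatable values (2 CPU, 3912780Ki memory), truncated to whole percent. A minimal sketch of that arithmetic with k8s.io/apimachinery's resource.Quantity, reusing the figures from this report (illustrative only, not minikube code):

	package main

	import (
		"fmt"

		"k8s.io/apimachinery/pkg/api/resource"
	)

	// pct truncates like kubectl describe's percentage column does.
	func pct(req, alloc resource.Quantity) int {
		// Milli-units make fractional CPUs such as "1350m" divide cleanly.
		return int(100 * req.MilliValue() / alloc.MilliValue())
	}

	func main() {
		cpuAlloc := resource.MustParse("2")         // Allocatable cpu
		memAlloc := resource.MustParse("3912780Ki") // Allocatable memory

		fmt.Printf("cpu requests: %d%%\n", pct(resource.MustParse("1350m"), cpuAlloc)) // 67%
		fmt.Printf("cpu limits:   %d%%\n", pct(resource.MustParse("700m"), cpuAlloc))  // 35%
		fmt.Printf("mem requests: %d%%\n", pct(resource.MustParse("682Mi"), memAlloc)) // 17%
		fmt.Printf("mem limits:   %d%%\n", pct(resource.MustParse("870Mi"), memAlloc)) // 22%
	}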
	
	
	==> dmesg <==
	[  +0.265431] systemd-fstab-generator[2430]: Ignoring "noauto" option for root device
	[Sep20 18:35] systemd-fstab-generator[2550]: Ignoring "noauto" option for root device
	[  +0.072754] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.497558] systemd-fstab-generator[3096]: Ignoring "noauto" option for root device
	[  +4.611767] kauditd_printk_skb: 107 callbacks suppressed
	[ +12.672607] systemd-fstab-generator[3439]: Ignoring "noauto" option for root device
	[  +0.095053] kauditd_printk_skb: 6 callbacks suppressed
	[ +17.198695] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.749016] systemd-fstab-generator[4642]: Ignoring "noauto" option for root device
	[  +0.134127] systemd-fstab-generator[4654]: Ignoring "noauto" option for root device
	[  +0.150564] systemd-fstab-generator[4668]: Ignoring "noauto" option for root device
	[  +0.133078] systemd-fstab-generator[4680]: Ignoring "noauto" option for root device
	[  +0.255245] systemd-fstab-generator[4708]: Ignoring "noauto" option for root device
	[  +5.025928] systemd-fstab-generator[4906]: Ignoring "noauto" option for root device
	[  +0.078378] kauditd_printk_skb: 155 callbacks suppressed
	[  +2.484663] systemd-fstab-generator[5354]: Ignoring "noauto" option for root device
	[  +4.324636] kauditd_printk_skb: 122 callbacks suppressed
	[Sep20 18:36] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.075876] systemd-fstab-generator[5944]: Ignoring "noauto" option for root device
	[  +6.321716] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.606645] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.636728] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.656846] kauditd_printk_skb: 18 callbacks suppressed
	[ +17.954536] kauditd_printk_skb: 32 callbacks suppressed
	[Sep20 18:37] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [14fc73baf20f3be1d1149f23bf9ae1ab71652015c84212a13727924f856adc73] <==
	{"level":"info","ts":"2024-09-20T18:35:57.550248Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:35:57.550439Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T18:35:57.550479Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T18:35:57.550501Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:35:57.551622Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:35:57.551637Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:35:57.552560Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.93:2379"}
	{"level":"info","ts":"2024-09-20T18:35:57.552647Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-09-20T18:36:29.342522Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.026303ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:36:29.342618Z","caller":"traceutil/trace.go:171","msg":"trace[1091189570] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:687; }","duration":"172.174449ms","start":"2024-09-20T18:36:29.170429Z","end":"2024-09-20T18:36:29.342604Z","steps":["trace[1091189570] 'range keys from in-memory index tree'  (duration: 171.94548ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:36:29.342737Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.520179ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:36:29.342776Z","caller":"traceutil/trace.go:171","msg":"trace[1254777371] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:687; }","duration":"160.559986ms","start":"2024-09-20T18:36:29.182207Z","end":"2024-09-20T18:36:29.342767Z","steps":["trace[1254777371] 'range keys from in-memory index tree'  (duration: 160.47841ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:36:29.342904Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.333615ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:36:29.342985Z","caller":"traceutil/trace.go:171","msg":"trace[1398342944] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:687; }","duration":"133.415561ms","start":"2024-09-20T18:36:29.209563Z","end":"2024-09-20T18:36:29.342979Z","steps":["trace[1398342944] 'range keys from in-memory index tree'  (duration: 133.32787ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:37:42.389690Z","caller":"traceutil/trace.go:171","msg":"trace[1241023795] linearizableReadLoop","detail":"{readStateIndex:948; appliedIndex:947; }","duration":"429.296133ms","start":"2024-09-20T18:37:41.960367Z","end":"2024-09-20T18:37:42.389664Z","steps":["trace[1241023795] 'read index received'  (duration: 429.178833ms)","trace[1241023795] 'applied index is now lower than readState.Index'  (duration: 116.833µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T18:37:42.389871Z","caller":"traceutil/trace.go:171","msg":"trace[217195296] transaction","detail":"{read_only:false; response_revision:874; number_of_response:1; }","duration":"432.277027ms","start":"2024-09-20T18:37:41.957582Z","end":"2024-09-20T18:37:42.389859Z","steps":["trace[217195296] 'process raft request'  (duration: 431.972745ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:37:42.390118Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.298342ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:37:42.390159Z","caller":"traceutil/trace.go:171","msg":"trace[1456665782] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:874; }","duration":"180.362751ms","start":"2024-09-20T18:37:42.209788Z","end":"2024-09-20T18:37:42.390150Z","steps":["trace[1456665782] 'agreement among raft nodes before linearized reading'  (duration: 180.280373ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:37:42.390283Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"429.909832ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:37:42.390299Z","caller":"traceutil/trace.go:171","msg":"trace[72418354] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:874; }","duration":"429.929923ms","start":"2024-09-20T18:37:41.960364Z","end":"2024-09-20T18:37:42.390294Z","steps":["trace[72418354] 'agreement among raft nodes before linearized reading'  (duration: 429.895776ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:37:42.390321Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T18:37:41.960340Z","time spent":"429.97324ms","remote":"127.0.0.1:56810","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-09-20T18:37:42.390441Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T18:37:41.957568Z","time spent":"432.323162ms","remote":"127.0.0.1:56800","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:870 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-09-20T18:37:42.390579Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.842582ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:37:42.390596Z","caller":"traceutil/trace.go:171","msg":"trace[2003012570] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:874; }","duration":"104.861024ms","start":"2024-09-20T18:37:42.285730Z","end":"2024-09-20T18:37:42.390591Z","steps":["trace[2003012570] 'agreement among raft nodes before linearized reading'  (duration: 104.830889ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:38:12.788436Z","caller":"traceutil/trace.go:171","msg":"trace[534785189] transaction","detail":"{read_only:false; response_revision:911; number_of_response:1; }","duration":"247.973979ms","start":"2024-09-20T18:38:12.540439Z","end":"2024-09-20T18:38:12.788413Z","steps":["trace[534785189] 'process raft request'  (duration: 247.876705ms)"],"step_count":1}
	
	
	==> etcd [29bad7e051fededdb23e1af642018dcf912edffbdbd38499fe84826cf6dae7d9] <==
	{"level":"info","ts":"2024-09-20T18:35:14.571828Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74d5eee0c2cff883 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-20T18:35:14.571866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74d5eee0c2cff883 received MsgPreVoteResp from 74d5eee0c2cff883 at term 2"}
	{"level":"info","ts":"2024-09-20T18:35:14.571884Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74d5eee0c2cff883 became candidate at term 3"}
	{"level":"info","ts":"2024-09-20T18:35:14.571890Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74d5eee0c2cff883 received MsgVoteResp from 74d5eee0c2cff883 at term 3"}
	{"level":"info","ts":"2024-09-20T18:35:14.571898Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74d5eee0c2cff883 became leader at term 3"}
	{"level":"info","ts":"2024-09-20T18:35:14.571905Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 74d5eee0c2cff883 elected leader 74d5eee0c2cff883 at term 3"}
	{"level":"info","ts":"2024-09-20T18:35:14.577349Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"74d5eee0c2cff883","local-member-attributes":"{Name:functional-023857 ClientURLs:[https://192.168.39.93:2379]}","request-path":"/0/members/74d5eee0c2cff883/attributes","cluster-id":"72f745f8ab51fb0b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T18:35:14.577393Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:35:14.577597Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T18:35:14.577664Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T18:35:14.577670Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:35:14.578269Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:35:14.578540Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:35:14.579111Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T18:35:14.579404Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.93:2379"}
	{"level":"info","ts":"2024-09-20T18:35:40.702414Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-20T18:35:40.702474Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-023857","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.93:2380"],"advertise-client-urls":["https://192.168.39.93:2379"]}
	{"level":"warn","ts":"2024-09-20T18:35:40.702538Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T18:35:40.702626Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T18:35:40.804202Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.93:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T18:35:40.804259Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.93:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-20T18:35:40.804342Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"74d5eee0c2cff883","current-leader-member-id":"74d5eee0c2cff883"}
	{"level":"info","ts":"2024-09-20T18:35:40.808073Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.93:2380"}
	{"level":"info","ts":"2024-09-20T18:35:40.808358Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.93:2380"}
	{"level":"info","ts":"2024-09-20T18:35:40.808394Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-023857","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.93:2380"],"advertise-client-urls":["https://192.168.39.93:2379"]}
	
	
	==> kernel <==
	 18:39:33 up 5 min,  0 users,  load average: 0.25, 0.27, 0.13
	Linux functional-023857 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c3142ee0cbf3160e4c60db766d0945a4b551e4fdc0e40cb15d0a6f0365a6272b] <==
	I0920 18:35:58.919457       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 18:35:58.919474       1 policy_source.go:224] refreshing policies
	I0920 18:35:58.923856       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0920 18:35:58.925200       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 18:35:58.925283       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0920 18:35:58.925598       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0920 18:35:58.925643       1 cache.go:39] Caches are synced for autoregister controller
	I0920 18:35:58.929990       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0920 18:35:59.016546       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0920 18:35:59.746353       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0920 18:36:00.435409       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0920 18:36:00.451144       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 18:36:00.489630       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 18:36:00.528892       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0920 18:36:00.536506       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0920 18:36:02.448453       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 18:36:02.548373       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0920 18:36:20.038036       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.172.34"}
	I0920 18:36:24.037842       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0920 18:36:24.160995       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.125.164"}
	I0920 18:36:25.628824       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.107.15.37"}
	I0920 18:36:35.241655       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.99.221.41"}
	I0920 18:36:38.908463       1 controller.go:615] quota admission added evaluator for: namespaces
	I0920 18:36:39.203892       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.159.17"}
	I0920 18:36:39.235119       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.19.255"}
	
	
	==> kube-controller-manager [de9f0252f7f81f538befe30b3e4415d2a20d2fe40b91c97042907719dae6b9bf] <==
	E0920 18:36:39.059579       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0920 18:36:39.059616       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="9.183685ms"
	E0920 18:36:39.059625       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0920 18:36:39.067448       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.408128ms"
	E0920 18:36:39.067491       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0920 18:36:39.067595       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="6.266721ms"
	E0920 18:36:39.067634       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0920 18:36:39.107827       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="26.346464ms"
	I0920 18:36:39.132876       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="51.15471ms"
	I0920 18:36:39.157260       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="49.21752ms"
	I0920 18:36:39.157359       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="50.466µs"
	I0920 18:36:39.171315       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="38.160252ms"
	I0920 18:36:39.171480       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="50.322µs"
	I0920 18:36:39.173332       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="34.355µs"
	I0920 18:36:39.204334       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="56.615µs"
	I0920 18:36:59.954269       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-023857"
	I0920 18:37:34.329827       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="164.627µs"
	I0920 18:37:38.374601       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="13.346605ms"
	I0920 18:37:38.374757       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="36.449µs"
	I0920 18:37:43.403327       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="17.051553ms"
	I0920 18:37:43.404435       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="46.98µs"
	I0920 18:37:45.692451       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="114.383µs"
	I0920 18:38:01.334825       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-023857"
	I0920 18:39:05.693816       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="207.849µs"
	I0920 18:39:17.690363       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="93.217µs"
	
	
	==> kube-controller-manager [e9dc2db11ced2fc4130cfa386ec557a5e2809bc92c435a2ab2ca9ecdcf610fd2] <==
	I0920 18:35:19.387745       1 shared_informer.go:320] Caches are synced for persistent volume
	I0920 18:35:19.393413       1 shared_informer.go:320] Caches are synced for GC
	I0920 18:35:19.417048       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 18:35:19.418263       1 shared_informer.go:320] Caches are synced for daemon sets
	I0920 18:35:19.421240       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0920 18:35:19.423120       1 shared_informer.go:320] Caches are synced for stateful set
	I0920 18:35:19.427535       1 shared_informer.go:320] Caches are synced for ephemeral
	I0920 18:35:19.437059       1 shared_informer.go:320] Caches are synced for disruption
	I0920 18:35:19.437282       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0920 18:35:19.437351       1 shared_informer.go:320] Caches are synced for PVC protection
	I0920 18:35:19.437363       1 shared_informer.go:320] Caches are synced for attach detach
	I0920 18:35:19.437371       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0920 18:35:19.437384       1 shared_informer.go:320] Caches are synced for taint
	I0920 18:35:19.438987       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0920 18:35:19.440990       1 shared_informer.go:320] Caches are synced for HPA
	I0920 18:35:19.439129       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-023857"
	I0920 18:35:19.441678       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0920 18:35:19.444092       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 18:35:19.447275       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="25.931922ms"
	I0920 18:35:19.448362       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="38.988µs"
	I0920 18:35:19.865895       1 shared_informer.go:320] Caches are synced for garbage collector
	I0920 18:35:19.865989       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0920 18:35:19.884412       1 shared_informer.go:320] Caches are synced for garbage collector
	I0920 18:35:21.226645       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="17.581405ms"
	I0920 18:35:21.227613       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="37.089µs"
	
	
	==> kube-proxy [830a9e361bd83ba6fb475f6c0ea298beea967a4a82d79c6789d50383ddee292c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:36:00.238979       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 18:36:00.249250       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.93"]
	E0920 18:36:00.249453       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:36:00.382732       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:36:00.382796       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:36:00.382820       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:36:00.388124       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:36:00.388403       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:36:00.388433       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:36:00.390201       1 config.go:199] "Starting service config controller"
	I0920 18:36:00.390235       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:36:00.390264       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:36:00.390284       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:36:00.390621       1 config.go:328] "Starting node config controller"
	I0920 18:36:00.390650       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:36:00.490777       1 shared_informer.go:320] Caches are synced for node config
	I0920 18:36:00.490820       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:36:00.490840       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [de751dc72438df7c2e75ead88ead0ddcf00fa24dece094bfb9b3663fd5324641] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:35:16.537307       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 18:35:16.545052       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.93"]
	E0920 18:35:16.545119       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:35:16.581543       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:35:16.581622       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:35:16.581655       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:35:16.584109       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:35:16.584352       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:35:16.584523       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:35:16.588103       1 config.go:199] "Starting service config controller"
	I0920 18:35:16.588165       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:35:16.588230       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:35:16.588247       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:35:16.588815       1 config.go:328] "Starting node config controller"
	I0920 18:35:16.588850       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:35:16.689173       1 shared_informer.go:320] Caches are synced for node config
	I0920 18:35:16.689210       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:35:16.689224       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3aefe896edb12efcaef0b550d11d4d87828e02076446a338c5c0a89c582fefd9] <==
	I0920 18:35:57.070969       1 serving.go:386] Generated self-signed cert in-memory
	W0920 18:35:58.790018       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 18:35:58.790112       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 18:35:58.790122       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 18:35:58.790127       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 18:35:58.837959       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 18:35:58.838043       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:35:58.840042       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 18:35:58.840133       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 18:35:58.840176       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 18:35:58.840281       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 18:35:58.941665       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [8d92ff3168d16da22d6afd2037de4213d705f7e7f3e8b757785a2789640e33b5] <==
	I0920 18:35:12.840959       1 serving.go:386] Generated self-signed cert in-memory
	W0920 18:35:15.856114       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 18:35:15.856215       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 18:35:15.856225       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 18:35:15.856231       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 18:35:15.907031       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 18:35:15.907125       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:35:15.911497       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 18:35:15.911587       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 18:35:15.911603       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 18:35:15.911627       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 18:35:16.012554       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 18:35:40.689387       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0920 18:35:40.689476       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0920 18:35:40.690112       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0920 18:35:40.690416       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 20 18:38:45 functional-023857 kubelet[5361]: E0920 18:38:45.796561    5361 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857525796230454,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232799,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:45 functional-023857 kubelet[5361]: E0920 18:38:45.797843    5361 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857525796230454,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232799,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:50 functional-023857 kubelet[5361]: E0920 18:38:50.250532    5361 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Sep 20 18:38:50 functional-023857 kubelet[5361]: E0920 18:38:50.250591    5361 kuberuntime_image.go:55] "Failed to pull image" err="fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Sep 20 18:38:50 functional-023857 kubelet[5361]: E0920 18:38:50.252035    5361 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:mysql,Image:docker.io/mysql:5.7,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:mysql,HostPort:0,ContainerPort:3306,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:MYSQL_ROOT_PASSWORD,Value:password,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{700 -3} {<nil>} 700m DecimalSI},memory: {{734003200 0} {<nil>} 700Mi BinarySI},},Requests:ResourceList{cpu: {{600 -3} {<nil>} 600m DecimalSI},memory: {{536870912 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zsfmt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mysql-6cdb49bbb-p6v98_default(1df5da07-a4a6-40cb-85a8-25375612c542): ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 20 18:38:50 functional-023857 kubelet[5361]: E0920 18:38:50.253497    5361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-6cdb49bbb-p6v98" podUID="1df5da07-a4a6-40cb-85a8-25375612c542"
	Sep 20 18:38:55 functional-023857 kubelet[5361]: E0920 18:38:55.693252    5361 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 18:38:55 functional-023857 kubelet[5361]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 18:38:55 functional-023857 kubelet[5361]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 18:38:55 functional-023857 kubelet[5361]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 18:38:55 functional-023857 kubelet[5361]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 18:38:55 functional-023857 kubelet[5361]: E0920 18:38:55.800044    5361 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857535799212090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232799,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:38:55 functional-023857 kubelet[5361]: E0920 18:38:55.800118    5361 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857535799212090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232799,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:39:05 functional-023857 kubelet[5361]: E0920 18:39:05.677841    5361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\"\"" pod="default/mysql-6cdb49bbb-p6v98" podUID="1df5da07-a4a6-40cb-85a8-25375612c542"
	Sep 20 18:39:05 functional-023857 kubelet[5361]: E0920 18:39:05.802668    5361 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857545802234992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232799,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:39:05 functional-023857 kubelet[5361]: E0920 18:39:05.802738    5361 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857545802234992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232799,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:39:15 functional-023857 kubelet[5361]: E0920 18:39:15.806976    5361 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857555806679394,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232799,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:39:15 functional-023857 kubelet[5361]: E0920 18:39:15.807016    5361 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857555806679394,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232799,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:39:20 functional-023857 kubelet[5361]: E0920 18:39:20.933259    5361 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 20 18:39:20 functional-023857 kubelet[5361]: E0920 18:39:20.933581    5361 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 20 18:39:20 functional-023857 kubelet[5361]: E0920 18:39:20.933915    5361 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59dqs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod sp-pod_default(2af898fd-7b04-41ed-8c9e-0651f29c22bc): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 20 18:39:20 functional-023857 kubelet[5361]: E0920 18:39:20.936877    5361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="2af898fd-7b04-41ed-8c9e-0651f29c22bc"
	Sep 20 18:39:25 functional-023857 kubelet[5361]: E0920 18:39:25.809630    5361 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857565808042973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232799,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:39:25 functional-023857 kubelet[5361]: E0920 18:39:25.809673    5361 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857565808042973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232799,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:39:32 functional-023857 kubelet[5361]: E0920 18:39:32.676608    5361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="2af898fd-7b04-41ed-8c9e-0651f29c22bc"
	
	
	==> kubernetes-dashboard [98422586941ed5d5be11194f2972f46131b0c8df14c0c534d99ff741cda814ea] <==
	2024/09/20 18:37:42 Using namespace: kubernetes-dashboard
	2024/09/20 18:37:42 Using in-cluster config to connect to apiserver
	2024/09/20 18:37:42 Using secret token for csrf signing
	2024/09/20 18:37:42 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/20 18:37:42 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/20 18:37:42 Successful initial request to the apiserver, version: v1.31.1
	2024/09/20 18:37:42 Generating JWE encryption key
	2024/09/20 18:37:42 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/20 18:37:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/20 18:37:42 Initializing JWE encryption key from synchronized object
	2024/09/20 18:37:42 Creating in-cluster Sidecar client
	2024/09/20 18:37:42 Serving insecurely on HTTP port: 9090
	2024/09/20 18:37:42 Successful request to sidecar
	2024/09/20 18:37:42 Starting overwatch
	
	
	==> storage-provisioner [4886699ad63bba9d38ef13eb8597a566114a9ed695dd0a63c2b15c14676c6ba9] <==
	I0920 18:35:16.348895       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 18:35:16.380208       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 18:35:16.380259       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 18:35:33.786576       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 18:35:33.786783       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-023857_4c4a8201-b60b-4e99-b3c0-0e0168c9d5a6!
	I0920 18:35:33.787287       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3bc7a283-94d6-45db-a725-a48b7fea6f02", APIVersion:"v1", ResourceVersion:"496", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-023857_4c4a8201-b60b-4e99-b3c0-0e0168c9d5a6 became leader
	I0920 18:35:33.887443       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-023857_4c4a8201-b60b-4e99-b3c0-0e0168c9d5a6!
	
	
	==> storage-provisioner [a895e49e3fae6b133f7d6971a77f242750e509c1acf7736347302588367441d4] <==
	I0920 18:35:59.972851       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 18:36:00.027239       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 18:36:00.027735       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 18:36:17.438110       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 18:36:17.438832       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3bc7a283-94d6-45db-a725-a48b7fea6f02", APIVersion:"v1", ResourceVersion:"594", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-023857_ab64c720-1759-4961-a184-4cbec2442aad became leader
	I0920 18:36:17.438873       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-023857_ab64c720-1759-4961-a184-4cbec2442aad!
	I0920 18:36:17.540077       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-023857_ab64c720-1759-4961-a184-4cbec2442aad!
	I0920 18:36:29.538438       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0920 18:36:29.546992       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"bf4d0cb7-4337-43e3-8357-543af67d4579", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0920 18:36:29.546558       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    08b137ab-dca1-457e-9462-73164eff8ffa 347 0 2024-09-20 18:34:45 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-20 18:34:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-bf4d0cb7-4337-43e3-8357-543af67d4579 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  bf4d0cb7-4337-43e3-8357-543af67d4579 690 0 2024-09-20 18:36:29 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-20 18:36:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-20 18:36:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0920 18:36:29.559419       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-bf4d0cb7-4337-43e3-8357-543af67d4579" provisioned
	I0920 18:36:29.559471       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0920 18:36:29.559486       1 volume_store.go:212] Trying to save persistentvolume "pvc-bf4d0cb7-4337-43e3-8357-543af67d4579"
	I0920 18:36:29.593396       1 volume_store.go:219] persistentvolume "pvc-bf4d0cb7-4337-43e3-8357-543af67d4579" saved
	I0920 18:36:29.597864       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"bf4d0cb7-4337-43e3-8357-543af67d4579", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-bf4d0cb7-4337-43e3-8357-543af67d4579
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-023857 -n functional-023857
helpers_test.go:261: (dbg) Run:  kubectl --context functional-023857 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-6cdb49bbb-p6v98 sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-023857 describe pod busybox-mount mysql-6cdb49bbb-p6v98 sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-023857 describe pod busybox-mount mysql-6cdb49bbb-p6v98 sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-023857/192.168.39.93
	Start Time:       Fri, 20 Sep 2024 18:36:37 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  cri-o://c1603e7fc015d5c007156a7ce65b313c1ca3bade746930b1bcef783bce75b237
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 20 Sep 2024 18:37:35 +0000
	      Finished:     Fri, 20 Sep 2024 18:37:35 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tw7s5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-tw7s5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  2m56s  default-scheduler  Successfully assigned default/busybox-mount to functional-023857
	  Normal  Pulling    2m56s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     119s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.169s (57.02s including waiting). Image size: 4631262 bytes.
	  Normal  Created    119s   kubelet            Created container mount-munger
	  Normal  Started    119s   kubelet            Started container mount-munger
	
	
	Name:             mysql-6cdb49bbb-p6v98
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-023857/192.168.39.93
	Start Time:       Fri, 20 Sep 2024 18:36:35 +0000
	Labels:           app=mysql
	                  pod-template-hash=6cdb49bbb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-6cdb49bbb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zsfmt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zsfmt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m59s                default-scheduler  Successfully assigned default/mysql-6cdb49bbb-p6v98 to functional-023857
	  Warning  Failed     2m1s                 kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     44s (x2 over 2m1s)   kubelet            Error: ErrImagePull
	  Warning  Failed     44s                  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    29s (x2 over 2m)     kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     29s (x2 over 2m)     kubelet            Error: ImagePullBackOff
	  Normal   Pulling    17s (x3 over 2m59s)  kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-023857/192.168.39.93
	Start Time:       Fri, 20 Sep 2024 18:36:31 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-59dqs (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-59dqs:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m2s                 default-scheduler  Successfully assigned default/sp-pod to functional-023857
	  Warning  Failed     2m31s                kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    57s (x3 over 3m2s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     14s (x3 over 2m31s)  kubelet            Error: ErrImagePull
	  Warning  Failed     14s (x2 over 81s)    kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2s (x3 over 2m30s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     2s (x3 over 2m30s)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
E0920 18:39:55.905485  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:40:23.607531  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:44:55.904505  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (190.24s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (602.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-023857 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-p6v98" [1df5da07-a4a6-40cb-85a8-25375612c542] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:329: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1799: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1799: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-023857 -n functional-023857
functional_test.go:1799: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2024-09-20 18:46:35.554697681 +0000 UTC m=+2047.400342282
functional_test.go:1799: (dbg) Run:  kubectl --context functional-023857 describe po mysql-6cdb49bbb-p6v98 -n default
functional_test.go:1799: (dbg) kubectl --context functional-023857 describe po mysql-6cdb49bbb-p6v98 -n default:
Name:             mysql-6cdb49bbb-p6v98
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-023857/192.168.39.93
Start Time:       Fri, 20 Sep 2024 18:36:35 +0000
Labels:           app=mysql
pod-template-hash=6cdb49bbb
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
IP:           10.244.0.11
Controlled By:  ReplicaSet/mysql-6cdb49bbb
Containers:
mysql:
Container ID:   
Image:          docker.io/mysql:5.7
Image ID:       
Port:           3306/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zsfmt (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-zsfmt:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-6cdb49bbb-p6v98 to functional-023857
Warning  Failed     7m45s                 kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   Pulling    5m54s (x4 over 10m)   kubelet            Pulling image "docker.io/mysql:5.7"
Warning  Failed     5m4s (x3 over 9m2s)   kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     5m4s (x4 over 9m2s)   kubelet            Error: ErrImagePull
Normal   BackOff    4m37s (x7 over 9m1s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
Warning  Failed     4m37s (x7 over 9m1s)  kubelet            Error: ImagePullBackOff
functional_test.go:1799: (dbg) Run:  kubectl --context functional-023857 logs mysql-6cdb49bbb-p6v98 -n default
functional_test.go:1799: (dbg) Non-zero exit: kubectl --context functional-023857 logs mysql-6cdb49bbb-p6v98 -n default: exit status 1 (70.122934ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-6cdb49bbb-p6v98" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1799: kubectl --context functional-023857 logs mysql-6cdb49bbb-p6v98 -n default: exit status 1
functional_test.go:1801: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-023857 -n functional-023857
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-023857 logs -n 25: (1.452072134s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-023857 ssh stat                                               | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	|                | /mount-9p/created-by-pod                                                 |                   |         |         |                     |                     |
	| ssh            | functional-023857 ssh sudo                                               | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount          | -p functional-023857                                                     | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdspecific-port2890524020/001:/mount-9p |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1 --port 46464                                      |                   |         |         |                     |                     |
	| ssh            | functional-023857 ssh findmnt                                            | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC |                     |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh            | functional-023857 ssh findmnt                                            | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh            | functional-023857 ssh -- ls                                              | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	|                | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| ssh            | functional-023857 ssh sudo                                               | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC |                     |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount          | -p functional-023857                                                     | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2424216336/001:/mount1   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-023857                                                     | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2424216336/001:/mount2   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-023857                                                     | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2424216336/001:/mount3   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh            | functional-023857 ssh findmnt                                            | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC |                     |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| ssh            | functional-023857 ssh findmnt                                            | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| ssh            | functional-023857 ssh findmnt                                            | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	|                | -T /mount2                                                               |                   |         |         |                     |                     |
	| ssh            | functional-023857 ssh findmnt                                            | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	|                | -T /mount3                                                               |                   |         |         |                     |                     |
	| mount          | -p functional-023857                                                     | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC |                     |
	|                | --kill=true                                                              |                   |         |         |                     |                     |
	| image          | functional-023857                                                        | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	|                | image ls --format short                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-023857                                                        | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	|                | image ls --format yaml                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| ssh            | functional-023857 ssh pgrep                                              | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC |                     |
	|                | buildkitd                                                                |                   |         |         |                     |                     |
	| image          | functional-023857 image build -t                                         | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	|                | localhost/my-image:functional-023857                                     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                         |                   |         |         |                     |                     |
	| image          | functional-023857 image ls                                               | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	| image          | functional-023857                                                        | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	|                | image ls --format json                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-023857                                                        | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	|                | image ls --format table                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| update-context | functional-023857                                                        | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| update-context | functional-023857                                                        | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| update-context | functional-023857                                                        | functional-023857 | jenkins | v1.34.0 | 20 Sep 24 18:37 UTC | 20 Sep 24 18:37 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:36:37
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:36:37.808076  759778 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:36:37.808339  759778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:36:37.808349  759778 out.go:358] Setting ErrFile to fd 2...
	I0920 18:36:37.808354  759778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:36:37.808664  759778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
	I0920 18:36:37.809227  759778 out.go:352] Setting JSON to false
	I0920 18:36:37.810275  759778 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8348,"bootTime":1726849050,"procs":250,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:36:37.810374  759778 start.go:139] virtualization: kvm guest
	I0920 18:36:37.812436  759778 out.go:177] * [functional-023857] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:36:37.813834  759778 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 18:36:37.813841  759778 notify.go:220] Checking for updates...
	I0920 18:36:37.816359  759778 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:36:37.817759  759778 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:36:37.819129  759778 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:36:37.820349  759778 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:36:37.821722  759778 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:36:37.823321  759778 config.go:182] Loaded profile config "functional-023857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:36:37.823743  759778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:36:37.823814  759778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:36:37.839986  759778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39211
	I0920 18:36:37.840459  759778 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:36:37.841068  759778 main.go:141] libmachine: Using API Version  1
	I0920 18:36:37.841109  759778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:36:37.841456  759778 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:36:37.841679  759778 main.go:141] libmachine: (functional-023857) Calling .DriverName
	I0920 18:36:37.841937  759778 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:36:37.842235  759778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:36:37.842275  759778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:36:37.857372  759778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38539
	I0920 18:36:37.857848  759778 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:36:37.858408  759778 main.go:141] libmachine: Using API Version  1
	I0920 18:36:37.858449  759778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:36:37.858816  759778 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:36:37.859004  759778 main.go:141] libmachine: (functional-023857) Calling .DriverName
	I0920 18:36:37.890577  759778 out.go:177] * Using the kvm2 driver based on the existing profile
	I0920 18:36:37.891878  759778 start.go:297] selected driver: kvm2
	I0920 18:36:37.891907  759778 start.go:901] validating driver "kvm2" against &{Name:functional-023857 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-023857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.93 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:36:37.892031  759778 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:36:37.894176  759778 out.go:201] 
	W0920 18:36:37.895327  759778 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB
	I0920 18:36:37.896536  759778 out.go:201] 
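
The warning above is why this "Last Start" attempt exited immediately: the start request asked for only 250 MiB of memory, below minikube's usable minimum of 1800 MB, so minikube aborted with RSRC_INSUFFICIENT_REQ_MEMORY before touching the existing VM (this looks like the functional test that intentionally requests an undersized allocation to exercise the check). As a rough sketch only, not a command from this run, a start invocation that would clear the memory check could look like the following; the 4000 MB figure simply mirrors the Memory:4000 value already recorded in the profile config above, and the other flags restate the existing profile settings:

	  out/minikube-linux-amd64 start -p functional-023857 --driver=kvm2 --container-runtime=crio --memory=4000mb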
	
	
	==> CRI-O <==
	Sep 20 18:46:36 functional-023857 crio[4715]: time="2024-09-20 18:46:36.360906288Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857996360882740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232799,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6bc99a77-527c-44f2-8b01-0655364f56ed name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:46:36 functional-023857 crio[4715]: time="2024-09-20 18:46:36.361437056Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=06c9f45d-fd3c-4164-ac2a-f720abff68ca name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:46:36 functional-023857 crio[4715]: time="2024-09-20 18:46:36.361514571Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=06c9f45d-fd3c-4164-ac2a-f720abff68ca name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:46:36 functional-023857 crio[4715]: time="2024-09-20 18:46:36.361841998Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98422586941ed5d5be11194f2972f46131b0c8df14c0c534d99ff741cda814ea,PodSandboxId:6086c556deeae280b86a459185f22aa485cfd26959f3adaa1703268446f8680f,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1726857462581828344,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-xfwwx,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7932711d-4564-4f20-8d18-fe63fd4af7a0,},Annotations:map[string]string{io.kubernetes.container.has
h: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6cc46c85979e72ea8eac5af4dfce9b3e03cfc30ee8fe3df005df4eeb431ec96,PodSandboxId:d5820c867eee6bfa759eeda9938e2365b3e482575a22c94df71eb6c3d7d04daa,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1726857457302862821,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c5db448b4-xllhd,io.kubernetes
.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 930cbd4b-8c12-4d96-aa75-d9df7152e4a4,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1603e7fc015d5c007156a7ce65b313c1ca3bade746930b1bcef783bce75b237,PodSandboxId:c4544c07b93ad383d5122b487fdf5d81875899dcd790600e02cc1a5f2773d818,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1726857455176584446,Labels:map[string]string{io.k
ubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f272a016-0500-4c42-a245-a79b4aa77359,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7d3f0d5a255544a69ade0d2d4cf74add4317166d9ea7f0331853d61caf151e,PodSandboxId:52adb96704963fcdc89396464d5ecffdb029c4b5a67cf577a46b6352e7ab947b,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1726857388208814640,Labels:map[string]string{io.kuber
netes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-hj76w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 358194f5-7d44-4bf1-9c90-a57f0079a0a3,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e05ec8ac967682862ac9b95c2d52fa754973ce6155a390d10efa87a475b4ec4,PodSandboxId:db5013d2a53b9a757db634861a4f6f5521d78976660ba9a62b685af9a03a24b6,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1726857388128125182,Labels:map[string
]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6b9f76b5c7-7rbf2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d7c14f12-78fa-492c-a893-12ea14bbaa08,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073528f9db5cd85f0481e1394beff20af69853f1cfa180605dbd126f7498eb80,PodSandboxId:8089343c80145c45214f6649c9382dfa169a417612275f62079ef83750ee0b74,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726857360230513309,Labels:map[string]string{io.kubernetes.contain
er.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2dmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66614af-8858-4f76-8b94-2580ac2fc019,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3142ee0cbf3160e4c60db766d0945a4b551e4fdc0e40cb15d0a6f0365a6272b,PodSandboxId:1d3b6dc1d00387ed949d376477e6f4a581da257175320ec78c616b64b66bc95a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726857356286081419,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292decab062e09b859f1ae460211fd66,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de9f0252f7f81f538befe30b3e4415d2a20d2fe40b91c97042907719dae6b9bf,PodSandboxId:02cec729ef1233f6ac1c29d605b339f4c31e41574034614489e745517f8cd0cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726857356110599367,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc86c4f5a4863fed0e816b217a7bd063,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aefe896edb12efcaef0b550d11d4d87828e02076446a338c5c0a89c582fefd9,PodSandboxId:a7148b6c1de9c58a6379d7d3b0adac8e64f9af61e28179cba2a697fdb395f013,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726857353907318977,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab59d3304493fda85f8b2ed4f1a23949,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a895e49e3fae6b133f7d6971a77f242750e509c1acf7736347302588367441d4,PodSandboxId:b3fdb8eae03bfd3e99258f6c361e224ac86f6b43534c4bf5a473c3dae5e3d426,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImag
e:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726857353973066465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864d9275-483c-414e-841c-7b0f97612610,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:830a9e361bd83ba6fb475f6c0ea298beea967a4a82d79c6789d50383ddee292c,PodSandboxId:07999a46ed097bcef6278d13465951b8e7555fc0e5a6b40b2269155d1c301284,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726857353959384074,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6hl6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 440a2827-7194-4721-b1e0-c356fc6be3af,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14fc73baf20f3be1d1149f23bf9ae1ab71652015c84212a13727924f856adc73,PodSandboxId:50100e03d3e9596d30816dd08ca880de56ab43f4d843f2af5803f5d671537952,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca
5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726857353854905932,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 109be7ba378e41e3d3f543e5ce2b30a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4e5a788dc92fc0c871582d643d9a32e3397862a66072436d9e059c810cc4a9,PodSandboxId:2f9a5fddff00d0fbf043e38455556fb698c2754aff84a3ae392dd87f4d5cb80a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State
:CONTAINER_EXITED,CreatedAt:1726857316346831089,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2dmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66614af-8858-4f76-8b94-2580ac2fc019,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de751dc72438df7c2e75ead88ead0ddcf00fa24dece094bfb9b3663fd5324641,PodSandboxId:b9398e2cc0bd5dfd5082c0ba17301e83dc83138e67e28d2323f9772495d423cf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d
131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726857316321593818,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6hl6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 440a2827-7194-4721-b1e0-c356fc6be3af,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4886699ad63bba9d38ef13eb8597a566114a9ed695dd0a63c2b15c14676c6ba9,PodSandboxId:1a186182398e96144d56e47f5266152b92901106c457ca6f9928c860e995a8fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d86
7d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726857310383548384,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864d9275-483c-414e-841c-7b0f97612610,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29bad7e051fededdb23e1af642018dcf912edffbdbd38499fe84826cf6dae7d9,PodSandboxId:630c1a13b7bbb3c357c023af1c958f9094731aa73e9b3eac1398766107c2bc36,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726857310291259120,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 109be7ba378e41e3d3f543e5ce2b30a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d92ff3168d16da22d6afd2037de4213d705f7e7f3e8b757785a2789640e33b5,PodSandboxId:fc953c229a27103bc550e0e93e63df725cc792bcba7c5af415d1c5c55cb75016,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726857310232778586,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab59d3304493fda85f8b2ed4f1a23949,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9dc2db11ced2fc4130cfa386ec557a5e2809bc92c435a2ab2ca9ecdcf610fd2,PodSandboxId:bd5db08ef37cff584d43c66d5a73c8496b41a19cd04eab773eed6912311c843b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726857310261104883,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc86c4f5a4863fed0e816b217a7bd063,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=06c9f45d-fd3c-4164-ac2a-f720abff68ca name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:46:36 functional-023857 crio[4715]: time="2024-09-20 18:46:36.405103329Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e45943ec-c60e-48b3-984a-82e870853097 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:46:36 functional-023857 crio[4715]: time="2024-09-20 18:46:36.405180191Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e45943ec-c60e-48b3-984a-82e870853097 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:46:36 functional-023857 crio[4715]: time="2024-09-20 18:46:36.406259177Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c0a02129-bd69-4dc7-b470-9c1bff0ba73b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:46:36 functional-023857 crio[4715]: time="2024-09-20 18:46:36.407147061Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857996407123936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232799,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c0a02129-bd69-4dc7-b470-9c1bff0ba73b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:46:36 functional-023857 crio[4715]: time="2024-09-20 18:46:36.407805261Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3e7fe088-44b7-437f-bc4f-1b7f270a4268 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:46:36 functional-023857 crio[4715]: time="2024-09-20 18:46:36.407889658Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3e7fe088-44b7-437f-bc4f-1b7f270a4268 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:46:36 functional-023857 crio[4715]: time="2024-09-20 18:46:36.408348962Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98422586941ed5d5be11194f2972f46131b0c8df14c0c534d99ff741cda814ea,PodSandboxId:6086c556deeae280b86a459185f22aa485cfd26959f3adaa1703268446f8680f,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1726857462581828344,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-xfwwx,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7932711d-4564-4f20-8d18-fe63fd4af7a0,},Annotations:map[string]string{io.kubernetes.container.has
h: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6cc46c85979e72ea8eac5af4dfce9b3e03cfc30ee8fe3df005df4eeb431ec96,PodSandboxId:d5820c867eee6bfa759eeda9938e2365b3e482575a22c94df71eb6c3d7d04daa,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1726857457302862821,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c5db448b4-xllhd,io.kubernetes
.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 930cbd4b-8c12-4d96-aa75-d9df7152e4a4,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1603e7fc015d5c007156a7ce65b313c1ca3bade746930b1bcef783bce75b237,PodSandboxId:c4544c07b93ad383d5122b487fdf5d81875899dcd790600e02cc1a5f2773d818,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1726857455176584446,Labels:map[string]string{io.k
ubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f272a016-0500-4c42-a245-a79b4aa77359,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7d3f0d5a255544a69ade0d2d4cf74add4317166d9ea7f0331853d61caf151e,PodSandboxId:52adb96704963fcdc89396464d5ecffdb029c4b5a67cf577a46b6352e7ab947b,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1726857388208814640,Labels:map[string]string{io.kuber
netes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-hj76w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 358194f5-7d44-4bf1-9c90-a57f0079a0a3,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e05ec8ac967682862ac9b95c2d52fa754973ce6155a390d10efa87a475b4ec4,PodSandboxId:db5013d2a53b9a757db634861a4f6f5521d78976660ba9a62b685af9a03a24b6,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1726857388128125182,Labels:map[string
]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6b9f76b5c7-7rbf2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d7c14f12-78fa-492c-a893-12ea14bbaa08,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073528f9db5cd85f0481e1394beff20af69853f1cfa180605dbd126f7498eb80,PodSandboxId:8089343c80145c45214f6649c9382dfa169a417612275f62079ef83750ee0b74,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726857360230513309,Labels:map[string]string{io.kubernetes.contain
er.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2dmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66614af-8858-4f76-8b94-2580ac2fc019,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3142ee0cbf3160e4c60db766d0945a4b551e4fdc0e40cb15d0a6f0365a6272b,PodSandboxId:1d3b6dc1d00387ed949d376477e6f4a581da257175320ec78c616b64b66bc95a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726857356286081419,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292decab062e09b859f1ae460211fd66,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de9f0252f7f81f538befe30b3e4415d2a20d2fe40b91c97042907719dae6b9bf,PodSandboxId:02cec729ef1233f6ac1c29d605b339f4c31e41574034614489e745517f8cd0cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726857356110599367,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc86c4f5a4863fed0e816b217a7bd063,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aefe896edb12efcaef0b550d11d4d87828e02076446a338c5c0a89c582fefd9,PodSandboxId:a7148b6c1de9c58a6379d7d3b0adac8e64f9af61e28179cba2a697fdb395f013,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726857353907318977,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab59d3304493fda85f8b2ed4f1a23949,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a895e49e3fae6b133f7d6971a77f242750e509c1acf7736347302588367441d4,PodSandboxId:b3fdb8eae03bfd3e99258f6c361e224ac86f6b43534c4bf5a473c3dae5e3d426,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImag
e:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726857353973066465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864d9275-483c-414e-841c-7b0f97612610,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:830a9e361bd83ba6fb475f6c0ea298beea967a4a82d79c6789d50383ddee292c,PodSandboxId:07999a46ed097bcef6278d13465951b8e7555fc0e5a6b40b2269155d1c301284,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726857353959384074,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6hl6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 440a2827-7194-4721-b1e0-c356fc6be3af,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14fc73baf20f3be1d1149f23bf9ae1ab71652015c84212a13727924f856adc73,PodSandboxId:50100e03d3e9596d30816dd08ca880de56ab43f4d843f2af5803f5d671537952,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca
5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726857353854905932,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 109be7ba378e41e3d3f543e5ce2b30a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4e5a788dc92fc0c871582d643d9a32e3397862a66072436d9e059c810cc4a9,PodSandboxId:2f9a5fddff00d0fbf043e38455556fb698c2754aff84a3ae392dd87f4d5cb80a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State
:CONTAINER_EXITED,CreatedAt:1726857316346831089,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2dmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66614af-8858-4f76-8b94-2580ac2fc019,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de751dc72438df7c2e75ead88ead0ddcf00fa24dece094bfb9b3663fd5324641,PodSandboxId:b9398e2cc0bd5dfd5082c0ba17301e83dc83138e67e28d2323f9772495d423cf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d
131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726857316321593818,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6hl6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 440a2827-7194-4721-b1e0-c356fc6be3af,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4886699ad63bba9d38ef13eb8597a566114a9ed695dd0a63c2b15c14676c6ba9,PodSandboxId:1a186182398e96144d56e47f5266152b92901106c457ca6f9928c860e995a8fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d86
7d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726857310383548384,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864d9275-483c-414e-841c-7b0f97612610,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29bad7e051fededdb23e1af642018dcf912edffbdbd38499fe84826cf6dae7d9,PodSandboxId:630c1a13b7bbb3c357c023af1c958f9094731aa73e9b3eac1398766107c2bc36,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726857310291259120,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 109be7ba378e41e3d3f543e5ce2b30a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d92ff3168d16da22d6afd2037de4213d705f7e7f3e8b757785a2789640e33b5,PodSandboxId:fc953c229a27103bc550e0e93e63df725cc792bcba7c5af415d1c5c55cb75016,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726857310232778586,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab59d3304493fda85f8b2ed4f1a23949,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9dc2db11ced2fc4130cfa386ec557a5e2809bc92c435a2ab2ca9ecdcf610fd2,PodSandboxId:bd5db08ef37cff584d43c66d5a73c8496b41a19cd04eab773eed6912311c843b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726857310261104883,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc86c4f5a4863fed0e816b217a7bd063,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3e7fe088-44b7-437f-bc4f-1b7f270a4268 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:46:36 functional-023857 crio[4715]: time="2024-09-20 18:46:36.443574639Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0d549c31-4dc6-4a59-aa21-9c0b6bc03cbd name=/runtime.v1.RuntimeService/Version
	Sep 20 18:46:36 functional-023857 crio[4715]: time="2024-09-20 18:46:36.443675181Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0d549c31-4dc6-4a59-aa21-9c0b6bc03cbd name=/runtime.v1.RuntimeService/Version
	Sep 20 18:46:36 functional-023857 crio[4715]: time="2024-09-20 18:46:36.445013003Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=102cb6ad-66d3-4575-8335-932b9dff62f8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:46:36 functional-023857 crio[4715]: time="2024-09-20 18:46:36.445836511Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857996445811387,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232799,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=102cb6ad-66d3-4575-8335-932b9dff62f8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:46:36 functional-023857 crio[4715]: time="2024-09-20 18:46:36.446680468Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=35b7b0b9-4c98-4edd-b3e7-c9b463918c0f name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:46:36 functional-023857 crio[4715]: time="2024-09-20 18:46:36.446757024Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=35b7b0b9-4c98-4edd-b3e7-c9b463918c0f name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:46:36 functional-023857 crio[4715]: time="2024-09-20 18:46:36.447566306Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98422586941ed5d5be11194f2972f46131b0c8df14c0c534d99ff741cda814ea,PodSandboxId:6086c556deeae280b86a459185f22aa485cfd26959f3adaa1703268446f8680f,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1726857462581828344,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-xfwwx,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7932711d-4564-4f20-8d18-fe63fd4af7a0,},Annotations:map[string]string{io.kubernetes.container.has
h: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6cc46c85979e72ea8eac5af4dfce9b3e03cfc30ee8fe3df005df4eeb431ec96,PodSandboxId:d5820c867eee6bfa759eeda9938e2365b3e482575a22c94df71eb6c3d7d04daa,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1726857457302862821,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c5db448b4-xllhd,io.kubernetes
.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 930cbd4b-8c12-4d96-aa75-d9df7152e4a4,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1603e7fc015d5c007156a7ce65b313c1ca3bade746930b1bcef783bce75b237,PodSandboxId:c4544c07b93ad383d5122b487fdf5d81875899dcd790600e02cc1a5f2773d818,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1726857455176584446,Labels:map[string]string{io.k
ubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f272a016-0500-4c42-a245-a79b4aa77359,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7d3f0d5a255544a69ade0d2d4cf74add4317166d9ea7f0331853d61caf151e,PodSandboxId:52adb96704963fcdc89396464d5ecffdb029c4b5a67cf577a46b6352e7ab947b,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1726857388208814640,Labels:map[string]string{io.kuber
netes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-hj76w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 358194f5-7d44-4bf1-9c90-a57f0079a0a3,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e05ec8ac967682862ac9b95c2d52fa754973ce6155a390d10efa87a475b4ec4,PodSandboxId:db5013d2a53b9a757db634861a4f6f5521d78976660ba9a62b685af9a03a24b6,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1726857388128125182,Labels:map[string
]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6b9f76b5c7-7rbf2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d7c14f12-78fa-492c-a893-12ea14bbaa08,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073528f9db5cd85f0481e1394beff20af69853f1cfa180605dbd126f7498eb80,PodSandboxId:8089343c80145c45214f6649c9382dfa169a417612275f62079ef83750ee0b74,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726857360230513309,Labels:map[string]string{io.kubernetes.contain
er.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2dmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66614af-8858-4f76-8b94-2580ac2fc019,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3142ee0cbf3160e4c60db766d0945a4b551e4fdc0e40cb15d0a6f0365a6272b,PodSandboxId:1d3b6dc1d00387ed949d376477e6f4a581da257175320ec78c616b64b66bc95a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726857356286081419,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292decab062e09b859f1ae460211fd66,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de9f0252f7f81f538befe30b3e4415d2a20d2fe40b91c97042907719dae6b9bf,PodSandboxId:02cec729ef1233f6ac1c29d605b339f4c31e41574034614489e745517f8cd0cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726857356110599367,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc86c4f5a4863fed0e816b217a7bd063,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aefe896edb12efcaef0b550d11d4d87828e02076446a338c5c0a89c582fefd9,PodSandboxId:a7148b6c1de9c58a6379d7d3b0adac8e64f9af61e28179cba2a697fdb395f013,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726857353907318977,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab59d3304493fda85f8b2ed4f1a23949,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a895e49e3fae6b133f7d6971a77f242750e509c1acf7736347302588367441d4,PodSandboxId:b3fdb8eae03bfd3e99258f6c361e224ac86f6b43534c4bf5a473c3dae5e3d426,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImag
e:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726857353973066465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864d9275-483c-414e-841c-7b0f97612610,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:830a9e361bd83ba6fb475f6c0ea298beea967a4a82d79c6789d50383ddee292c,PodSandboxId:07999a46ed097bcef6278d13465951b8e7555fc0e5a6b40b2269155d1c301284,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726857353959384074,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6hl6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 440a2827-7194-4721-b1e0-c356fc6be3af,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14fc73baf20f3be1d1149f23bf9ae1ab71652015c84212a13727924f856adc73,PodSandboxId:50100e03d3e9596d30816dd08ca880de56ab43f4d843f2af5803f5d671537952,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca
5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726857353854905932,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 109be7ba378e41e3d3f543e5ce2b30a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4e5a788dc92fc0c871582d643d9a32e3397862a66072436d9e059c810cc4a9,PodSandboxId:2f9a5fddff00d0fbf043e38455556fb698c2754aff84a3ae392dd87f4d5cb80a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State
:CONTAINER_EXITED,CreatedAt:1726857316346831089,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2dmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66614af-8858-4f76-8b94-2580ac2fc019,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de751dc72438df7c2e75ead88ead0ddcf00fa24dece094bfb9b3663fd5324641,PodSandboxId:b9398e2cc0bd5dfd5082c0ba17301e83dc83138e67e28d2323f9772495d423cf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d
131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726857316321593818,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6hl6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 440a2827-7194-4721-b1e0-c356fc6be3af,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4886699ad63bba9d38ef13eb8597a566114a9ed695dd0a63c2b15c14676c6ba9,PodSandboxId:1a186182398e96144d56e47f5266152b92901106c457ca6f9928c860e995a8fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d86
7d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726857310383548384,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864d9275-483c-414e-841c-7b0f97612610,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29bad7e051fededdb23e1af642018dcf912edffbdbd38499fe84826cf6dae7d9,PodSandboxId:630c1a13b7bbb3c357c023af1c958f9094731aa73e9b3eac1398766107c2bc36,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726857310291259120,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 109be7ba378e41e3d3f543e5ce2b30a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d92ff3168d16da22d6afd2037de4213d705f7e7f3e8b757785a2789640e33b5,PodSandboxId:fc953c229a27103bc550e0e93e63df725cc792bcba7c5af415d1c5c55cb75016,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726857310232778586,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab59d3304493fda85f8b2ed4f1a23949,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9dc2db11ced2fc4130cfa386ec557a5e2809bc92c435a2ab2ca9ecdcf610fd2,PodSandboxId:bd5db08ef37cff584d43c66d5a73c8496b41a19cd04eab773eed6912311c843b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726857310261104883,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc86c4f5a4863fed0e816b217a7bd063,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=35b7b0b9-4c98-4edd-b3e7-c9b463918c0f name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:46:36 functional-023857 crio[4715]: time="2024-09-20 18:46:36.485192475Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9252ec79-aa35-47e9-9615-e166b1008021 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:46:36 functional-023857 crio[4715]: time="2024-09-20 18:46:36.485266311Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9252ec79-aa35-47e9-9615-e166b1008021 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:46:36 functional-023857 crio[4715]: time="2024-09-20 18:46:36.486449260Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=724f1665-414f-4e80-b541-a240eb2c01b6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:46:36 functional-023857 crio[4715]: time="2024-09-20 18:46:36.487200096Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857996487176353,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232799,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=724f1665-414f-4e80-b541-a240eb2c01b6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:46:36 functional-023857 crio[4715]: time="2024-09-20 18:46:36.487717423Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=21453a3f-3276-4ec6-b93f-c527c0b59dd0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:46:36 functional-023857 crio[4715]: time="2024-09-20 18:46:36.487785514Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=21453a3f-3276-4ec6-b93f-c527c0b59dd0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:46:36 functional-023857 crio[4715]: time="2024-09-20 18:46:36.488222655Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98422586941ed5d5be11194f2972f46131b0c8df14c0c534d99ff741cda814ea,PodSandboxId:6086c556deeae280b86a459185f22aa485cfd26959f3adaa1703268446f8680f,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1726857462581828344,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-xfwwx,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7932711d-4564-4f20-8d18-fe63fd4af7a0,},Annotations:map[string]string{io.kubernetes.container.has
h: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6cc46c85979e72ea8eac5af4dfce9b3e03cfc30ee8fe3df005df4eeb431ec96,PodSandboxId:d5820c867eee6bfa759eeda9938e2365b3e482575a22c94df71eb6c3d7d04daa,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1726857457302862821,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c5db448b4-xllhd,io.kubernetes
.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 930cbd4b-8c12-4d96-aa75-d9df7152e4a4,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1603e7fc015d5c007156a7ce65b313c1ca3bade746930b1bcef783bce75b237,PodSandboxId:c4544c07b93ad383d5122b487fdf5d81875899dcd790600e02cc1a5f2773d818,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1726857455176584446,Labels:map[string]string{io.k
ubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f272a016-0500-4c42-a245-a79b4aa77359,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7d3f0d5a255544a69ade0d2d4cf74add4317166d9ea7f0331853d61caf151e,PodSandboxId:52adb96704963fcdc89396464d5ecffdb029c4b5a67cf577a46b6352e7ab947b,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1726857388208814640,Labels:map[string]string{io.kuber
netes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-hj76w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 358194f5-7d44-4bf1-9c90-a57f0079a0a3,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e05ec8ac967682862ac9b95c2d52fa754973ce6155a390d10efa87a475b4ec4,PodSandboxId:db5013d2a53b9a757db634861a4f6f5521d78976660ba9a62b685af9a03a24b6,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1726857388128125182,Labels:map[string
]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6b9f76b5c7-7rbf2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d7c14f12-78fa-492c-a893-12ea14bbaa08,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073528f9db5cd85f0481e1394beff20af69853f1cfa180605dbd126f7498eb80,PodSandboxId:8089343c80145c45214f6649c9382dfa169a417612275f62079ef83750ee0b74,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726857360230513309,Labels:map[string]string{io.kubernetes.contain
er.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2dmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66614af-8858-4f76-8b94-2580ac2fc019,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3142ee0cbf3160e4c60db766d0945a4b551e4fdc0e40cb15d0a6f0365a6272b,PodSandboxId:1d3b6dc1d00387ed949d376477e6f4a581da257175320ec78c616b64b66bc95a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726857356286081419,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292decab062e09b859f1ae460211fd66,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de9f0252f7f81f538befe30b3e4415d2a20d2fe40b91c97042907719dae6b9bf,PodSandboxId:02cec729ef1233f6ac1c29d605b339f4c31e41574034614489e745517f8cd0cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726857356110599367,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc86c4f5a4863fed0e816b217a7bd063,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aefe896edb12efcaef0b550d11d4d87828e02076446a338c5c0a89c582fefd9,PodSandboxId:a7148b6c1de9c58a6379d7d3b0adac8e64f9af61e28179cba2a697fdb395f013,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726857353907318977,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab59d3304493fda85f8b2ed4f1a23949,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a895e49e3fae6b133f7d6971a77f242750e509c1acf7736347302588367441d4,PodSandboxId:b3fdb8eae03bfd3e99258f6c361e224ac86f6b43534c4bf5a473c3dae5e3d426,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImag
e:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726857353973066465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864d9275-483c-414e-841c-7b0f97612610,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:830a9e361bd83ba6fb475f6c0ea298beea967a4a82d79c6789d50383ddee292c,PodSandboxId:07999a46ed097bcef6278d13465951b8e7555fc0e5a6b40b2269155d1c301284,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726857353959384074,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6hl6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 440a2827-7194-4721-b1e0-c356fc6be3af,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14fc73baf20f3be1d1149f23bf9ae1ab71652015c84212a13727924f856adc73,PodSandboxId:50100e03d3e9596d30816dd08ca880de56ab43f4d843f2af5803f5d671537952,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca
5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726857353854905932,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 109be7ba378e41e3d3f543e5ce2b30a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4e5a788dc92fc0c871582d643d9a32e3397862a66072436d9e059c810cc4a9,PodSandboxId:2f9a5fddff00d0fbf043e38455556fb698c2754aff84a3ae392dd87f4d5cb80a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State
:CONTAINER_EXITED,CreatedAt:1726857316346831089,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v2dmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66614af-8858-4f76-8b94-2580ac2fc019,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de751dc72438df7c2e75ead88ead0ddcf00fa24dece094bfb9b3663fd5324641,PodSandboxId:b9398e2cc0bd5dfd5082c0ba17301e83dc83138e67e28d2323f9772495d423cf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d
131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726857316321593818,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6hl6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 440a2827-7194-4721-b1e0-c356fc6be3af,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4886699ad63bba9d38ef13eb8597a566114a9ed695dd0a63c2b15c14676c6ba9,PodSandboxId:1a186182398e96144d56e47f5266152b92901106c457ca6f9928c860e995a8fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d86
7d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726857310383548384,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864d9275-483c-414e-841c-7b0f97612610,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29bad7e051fededdb23e1af642018dcf912edffbdbd38499fe84826cf6dae7d9,PodSandboxId:630c1a13b7bbb3c357c023af1c958f9094731aa73e9b3eac1398766107c2bc36,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726857310291259120,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 109be7ba378e41e3d3f543e5ce2b30a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d92ff3168d16da22d6afd2037de4213d705f7e7f3e8b757785a2789640e33b5,PodSandboxId:fc953c229a27103bc550e0e93e63df725cc792bcba7c5af415d1c5c55cb75016,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726857310232778586,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab59d3304493fda85f8b2ed4f1a23949,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9dc2db11ced2fc4130cfa386ec557a5e2809bc92c435a2ab2ca9ecdcf610fd2,PodSandboxId:bd5db08ef37cff584d43c66d5a73c8496b41a19cd04eab773eed6912311c843b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726857310261104883,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-023857,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc86c4f5a4863fed0e816b217a7bd063,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=21453a3f-3276-4ec6-b93f-c527c0b59dd0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	98422586941ed       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         8 minutes ago       Running             kubernetes-dashboard        0                   6086c556deeae       kubernetes-dashboard-695b96c756-xfwwx
	b6cc46c85979e       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   8 minutes ago       Running             dashboard-metrics-scraper   0                   d5820c867eee6       dashboard-metrics-scraper-c5db448b4-xllhd
	c1603e7fc015d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              9 minutes ago       Exited              mount-munger                0                   c4544c07b93ad       busybox-mount
	2b7d3f0d5a255       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               10 minutes ago      Running             echoserver                  0                   52adb96704963       hello-node-connect-67bdd5bbb4-hj76w
	3e05ec8ac9676       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               10 minutes ago      Running             echoserver                  0                   db5013d2a53b9       hello-node-6b9f76b5c7-7rbf2
	073528f9db5cd       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 10 minutes ago      Running             coredns                     2                   8089343c80145       coredns-7c65d6cfc9-v2dmd
	c3142ee0cbf31       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                 10 minutes ago      Running             kube-apiserver              0                   1d3b6dc1d0038       kube-apiserver-functional-023857
	de9f0252f7f81       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 10 minutes ago      Running             kube-controller-manager     2                   02cec729ef123       kube-controller-manager-functional-023857
	a895e49e3fae6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         2                   b3fdb8eae03bf       storage-provisioner
	830a9e361bd83       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 10 minutes ago      Running             kube-proxy                  2                   07999a46ed097       kube-proxy-k6hl6
	3aefe896edb12       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 10 minutes ago      Running             kube-scheduler              2                   a7148b6c1de9c       kube-scheduler-functional-023857
	14fc73baf20f3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 10 minutes ago      Running             etcd                        2                   50100e03d3e95       etcd-functional-023857
	0c4e5a788dc92       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 11 minutes ago      Exited              coredns                     1                   2f9a5fddff00d       coredns-7c65d6cfc9-v2dmd
	de751dc72438d       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 11 minutes ago      Exited              kube-proxy                  1                   b9398e2cc0bd5       kube-proxy-k6hl6
	4886699ad63bb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         1                   1a186182398e9       storage-provisioner
	29bad7e051fed       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 11 minutes ago      Exited              etcd                        1                   630c1a13b7bbb       etcd-functional-023857
	e9dc2db11ced2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 11 minutes ago      Exited              kube-controller-manager     1                   bd5db08ef37cf       kube-controller-manager-functional-023857
	8d92ff3168d16       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 11 minutes ago      Exited              kube-scheduler              1                   fc953c229a271       kube-scheduler-functional-023857
	
	
	==> coredns [073528f9db5cd85f0481e1394beff20af69853f1cfa180605dbd126f7498eb80] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:47151 - 32373 "HINFO IN 7303210622316958186.61452939891773212. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.029454658s
	
	
	==> coredns [0c4e5a788dc92fc0c871582d643d9a32e3397862a66072436d9e059c810cc4a9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:53602 - 44332 "HINFO IN 1581926696694530428.1537307737197261760. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033211115s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-023857
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-023857
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=functional-023857
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_34_39_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:34:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-023857
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:46:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:43:07 +0000   Fri, 20 Sep 2024 18:34:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:43:07 +0000   Fri, 20 Sep 2024 18:34:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:43:07 +0000   Fri, 20 Sep 2024 18:34:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:43:07 +0000   Fri, 20 Sep 2024 18:34:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.93
	  Hostname:    functional-023857
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 17227d3d682e46e29ecf58fda0a4b07a
	  System UUID:                17227d3d-682e-46e2-9ecf-58fda0a4b07a
	  Boot ID:                    a2770fe1-7030-4dea-b57f-b423181af6b5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6b9f76b5c7-7rbf2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-67bdd5bbb4-hj76w          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-6cdb49bbb-p6v98                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-v2dmd                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     11m
	  kube-system                 etcd-functional-023857                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         11m
	  kube-system                 kube-apiserver-functional-023857             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-023857    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-k6hl6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-023857             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-xllhd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-xfwwx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-023857 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-023857 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-023857 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeReady                11m                kubelet          Node functional-023857 status is now: NodeReady
	  Normal  RegisteredNode           11m                node-controller  Node functional-023857 event: Registered Node functional-023857 in Controller
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-023857 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-023857 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-023857 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-023857 event: Registered Node functional-023857 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-023857 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-023857 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-023857 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-023857 event: Registered Node functional-023857 in Controller
	
	
	==> dmesg <==
	[  +0.265431] systemd-fstab-generator[2430]: Ignoring "noauto" option for root device
	[Sep20 18:35] systemd-fstab-generator[2550]: Ignoring "noauto" option for root device
	[  +0.072754] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.497558] systemd-fstab-generator[3096]: Ignoring "noauto" option for root device
	[  +4.611767] kauditd_printk_skb: 107 callbacks suppressed
	[ +12.672607] systemd-fstab-generator[3439]: Ignoring "noauto" option for root device
	[  +0.095053] kauditd_printk_skb: 6 callbacks suppressed
	[ +17.198695] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.749016] systemd-fstab-generator[4642]: Ignoring "noauto" option for root device
	[  +0.134127] systemd-fstab-generator[4654]: Ignoring "noauto" option for root device
	[  +0.150564] systemd-fstab-generator[4668]: Ignoring "noauto" option for root device
	[  +0.133078] systemd-fstab-generator[4680]: Ignoring "noauto" option for root device
	[  +0.255245] systemd-fstab-generator[4708]: Ignoring "noauto" option for root device
	[  +5.025928] systemd-fstab-generator[4906]: Ignoring "noauto" option for root device
	[  +0.078378] kauditd_printk_skb: 155 callbacks suppressed
	[  +2.484663] systemd-fstab-generator[5354]: Ignoring "noauto" option for root device
	[  +4.324636] kauditd_printk_skb: 122 callbacks suppressed
	[Sep20 18:36] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.075876] systemd-fstab-generator[5944]: Ignoring "noauto" option for root device
	[  +6.321716] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.606645] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.636728] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.656846] kauditd_printk_skb: 18 callbacks suppressed
	[ +17.954536] kauditd_printk_skb: 32 callbacks suppressed
	[Sep20 18:37] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [14fc73baf20f3be1d1149f23bf9ae1ab71652015c84212a13727924f856adc73] <==
	{"level":"info","ts":"2024-09-20T18:35:57.550501Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:35:57.551622Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:35:57.551637Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:35:57.552560Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.93:2379"}
	{"level":"info","ts":"2024-09-20T18:35:57.552647Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-09-20T18:36:29.342522Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.026303ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:36:29.342618Z","caller":"traceutil/trace.go:171","msg":"trace[1091189570] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:687; }","duration":"172.174449ms","start":"2024-09-20T18:36:29.170429Z","end":"2024-09-20T18:36:29.342604Z","steps":["trace[1091189570] 'range keys from in-memory index tree'  (duration: 171.94548ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:36:29.342737Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.520179ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:36:29.342776Z","caller":"traceutil/trace.go:171","msg":"trace[1254777371] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:687; }","duration":"160.559986ms","start":"2024-09-20T18:36:29.182207Z","end":"2024-09-20T18:36:29.342767Z","steps":["trace[1254777371] 'range keys from in-memory index tree'  (duration: 160.47841ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:36:29.342904Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.333615ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:36:29.342985Z","caller":"traceutil/trace.go:171","msg":"trace[1398342944] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:687; }","duration":"133.415561ms","start":"2024-09-20T18:36:29.209563Z","end":"2024-09-20T18:36:29.342979Z","steps":["trace[1398342944] 'range keys from in-memory index tree'  (duration: 133.32787ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:37:42.389690Z","caller":"traceutil/trace.go:171","msg":"trace[1241023795] linearizableReadLoop","detail":"{readStateIndex:948; appliedIndex:947; }","duration":"429.296133ms","start":"2024-09-20T18:37:41.960367Z","end":"2024-09-20T18:37:42.389664Z","steps":["trace[1241023795] 'read index received'  (duration: 429.178833ms)","trace[1241023795] 'applied index is now lower than readState.Index'  (duration: 116.833µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T18:37:42.389871Z","caller":"traceutil/trace.go:171","msg":"trace[217195296] transaction","detail":"{read_only:false; response_revision:874; number_of_response:1; }","duration":"432.277027ms","start":"2024-09-20T18:37:41.957582Z","end":"2024-09-20T18:37:42.389859Z","steps":["trace[217195296] 'process raft request'  (duration: 431.972745ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:37:42.390118Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.298342ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:37:42.390159Z","caller":"traceutil/trace.go:171","msg":"trace[1456665782] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:874; }","duration":"180.362751ms","start":"2024-09-20T18:37:42.209788Z","end":"2024-09-20T18:37:42.390150Z","steps":["trace[1456665782] 'agreement among raft nodes before linearized reading'  (duration: 180.280373ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:37:42.390283Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"429.909832ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:37:42.390299Z","caller":"traceutil/trace.go:171","msg":"trace[72418354] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:874; }","duration":"429.929923ms","start":"2024-09-20T18:37:41.960364Z","end":"2024-09-20T18:37:42.390294Z","steps":["trace[72418354] 'agreement among raft nodes before linearized reading'  (duration: 429.895776ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:37:42.390321Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T18:37:41.960340Z","time spent":"429.97324ms","remote":"127.0.0.1:56810","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-09-20T18:37:42.390441Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T18:37:41.957568Z","time spent":"432.323162ms","remote":"127.0.0.1:56800","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:870 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-09-20T18:37:42.390579Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.842582ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:37:42.390596Z","caller":"traceutil/trace.go:171","msg":"trace[2003012570] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:874; }","duration":"104.861024ms","start":"2024-09-20T18:37:42.285730Z","end":"2024-09-20T18:37:42.390591Z","steps":["trace[2003012570] 'agreement among raft nodes before linearized reading'  (duration: 104.830889ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:38:12.788436Z","caller":"traceutil/trace.go:171","msg":"trace[534785189] transaction","detail":"{read_only:false; response_revision:911; number_of_response:1; }","duration":"247.973979ms","start":"2024-09-20T18:38:12.540439Z","end":"2024-09-20T18:38:12.788413Z","steps":["trace[534785189] 'process raft request'  (duration: 247.876705ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:45:57.579039Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1077}
	{"level":"info","ts":"2024-09-20T18:45:57.604764Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1077,"took":"25.274773ms","hash":819943878,"current-db-size-bytes":3813376,"current-db-size":"3.8 MB","current-db-size-in-use-bytes":1503232,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-09-20T18:45:57.604861Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":819943878,"revision":1077,"compact-revision":-1}
	
	
	==> etcd [29bad7e051fededdb23e1af642018dcf912edffbdbd38499fe84826cf6dae7d9] <==
	{"level":"info","ts":"2024-09-20T18:35:14.571828Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74d5eee0c2cff883 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-20T18:35:14.571866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74d5eee0c2cff883 received MsgPreVoteResp from 74d5eee0c2cff883 at term 2"}
	{"level":"info","ts":"2024-09-20T18:35:14.571884Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74d5eee0c2cff883 became candidate at term 3"}
	{"level":"info","ts":"2024-09-20T18:35:14.571890Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74d5eee0c2cff883 received MsgVoteResp from 74d5eee0c2cff883 at term 3"}
	{"level":"info","ts":"2024-09-20T18:35:14.571898Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74d5eee0c2cff883 became leader at term 3"}
	{"level":"info","ts":"2024-09-20T18:35:14.571905Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 74d5eee0c2cff883 elected leader 74d5eee0c2cff883 at term 3"}
	{"level":"info","ts":"2024-09-20T18:35:14.577349Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"74d5eee0c2cff883","local-member-attributes":"{Name:functional-023857 ClientURLs:[https://192.168.39.93:2379]}","request-path":"/0/members/74d5eee0c2cff883/attributes","cluster-id":"72f745f8ab51fb0b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T18:35:14.577393Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:35:14.577597Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T18:35:14.577664Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T18:35:14.577670Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:35:14.578269Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:35:14.578540Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:35:14.579111Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T18:35:14.579404Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.93:2379"}
	{"level":"info","ts":"2024-09-20T18:35:40.702414Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-20T18:35:40.702474Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-023857","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.93:2380"],"advertise-client-urls":["https://192.168.39.93:2379"]}
	{"level":"warn","ts":"2024-09-20T18:35:40.702538Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T18:35:40.702626Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T18:35:40.804202Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.93:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T18:35:40.804259Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.93:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-20T18:35:40.804342Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"74d5eee0c2cff883","current-leader-member-id":"74d5eee0c2cff883"}
	{"level":"info","ts":"2024-09-20T18:35:40.808073Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.93:2380"}
	{"level":"info","ts":"2024-09-20T18:35:40.808358Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.93:2380"}
	{"level":"info","ts":"2024-09-20T18:35:40.808394Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-023857","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.93:2380"],"advertise-client-urls":["https://192.168.39.93:2379"]}
	
	
	==> kernel <==
	 18:46:36 up 12 min,  0 users,  load average: 0.07, 0.17, 0.14
	Linux functional-023857 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c3142ee0cbf3160e4c60db766d0945a4b551e4fdc0e40cb15d0a6f0365a6272b] <==
	I0920 18:35:58.919457       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 18:35:58.919474       1 policy_source.go:224] refreshing policies
	I0920 18:35:58.923856       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0920 18:35:58.925200       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 18:35:58.925283       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0920 18:35:58.925598       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0920 18:35:58.925643       1 cache.go:39] Caches are synced for autoregister controller
	I0920 18:35:58.929990       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0920 18:35:59.016546       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0920 18:35:59.746353       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0920 18:36:00.435409       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0920 18:36:00.451144       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 18:36:00.489630       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 18:36:00.528892       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0920 18:36:00.536506       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0920 18:36:02.448453       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 18:36:02.548373       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0920 18:36:20.038036       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.172.34"}
	I0920 18:36:24.037842       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0920 18:36:24.160995       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.125.164"}
	I0920 18:36:25.628824       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.107.15.37"}
	I0920 18:36:35.241655       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.99.221.41"}
	I0920 18:36:38.908463       1 controller.go:615] quota admission added evaluator for: namespaces
	I0920 18:36:39.203892       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.159.17"}
	I0920 18:36:39.235119       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.19.255"}
	
	
	==> kube-controller-manager [de9f0252f7f81f538befe30b3e4415d2a20d2fe40b91c97042907719dae6b9bf] <==
	I0920 18:36:39.107827       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="26.346464ms"
	I0920 18:36:39.132876       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="51.15471ms"
	I0920 18:36:39.157260       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="49.21752ms"
	I0920 18:36:39.157359       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="50.466µs"
	I0920 18:36:39.171315       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="38.160252ms"
	I0920 18:36:39.171480       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="50.322µs"
	I0920 18:36:39.173332       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="34.355µs"
	I0920 18:36:39.204334       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="56.615µs"
	I0920 18:36:59.954269       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-023857"
	I0920 18:37:34.329827       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="164.627µs"
	I0920 18:37:38.374601       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="13.346605ms"
	I0920 18:37:38.374757       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="36.449µs"
	I0920 18:37:43.403327       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="17.051553ms"
	I0920 18:37:43.404435       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="46.98µs"
	I0920 18:37:45.692451       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="114.383µs"
	I0920 18:38:01.334825       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-023857"
	I0920 18:39:05.693816       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="207.849µs"
	I0920 18:39:17.690363       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="93.217µs"
	I0920 18:40:03.694915       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="120.136µs"
	I0920 18:40:14.690646       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="41.22µs"
	I0920 18:41:46.689305       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="173.156µs"
	I0920 18:41:58.692621       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="92.672µs"
	I0920 18:43:07.159710       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-023857"
	I0920 18:43:55.692739       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="203.96µs"
	I0920 18:44:08.693168       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="59.262µs"
	
	
	==> kube-controller-manager [e9dc2db11ced2fc4130cfa386ec557a5e2809bc92c435a2ab2ca9ecdcf610fd2] <==
	I0920 18:35:19.387745       1 shared_informer.go:320] Caches are synced for persistent volume
	I0920 18:35:19.393413       1 shared_informer.go:320] Caches are synced for GC
	I0920 18:35:19.417048       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 18:35:19.418263       1 shared_informer.go:320] Caches are synced for daemon sets
	I0920 18:35:19.421240       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0920 18:35:19.423120       1 shared_informer.go:320] Caches are synced for stateful set
	I0920 18:35:19.427535       1 shared_informer.go:320] Caches are synced for ephemeral
	I0920 18:35:19.437059       1 shared_informer.go:320] Caches are synced for disruption
	I0920 18:35:19.437282       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0920 18:35:19.437351       1 shared_informer.go:320] Caches are synced for PVC protection
	I0920 18:35:19.437363       1 shared_informer.go:320] Caches are synced for attach detach
	I0920 18:35:19.437371       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0920 18:35:19.437384       1 shared_informer.go:320] Caches are synced for taint
	I0920 18:35:19.438987       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0920 18:35:19.440990       1 shared_informer.go:320] Caches are synced for HPA
	I0920 18:35:19.439129       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-023857"
	I0920 18:35:19.441678       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0920 18:35:19.444092       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 18:35:19.447275       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="25.931922ms"
	I0920 18:35:19.448362       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="38.988µs"
	I0920 18:35:19.865895       1 shared_informer.go:320] Caches are synced for garbage collector
	I0920 18:35:19.865989       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0920 18:35:19.884412       1 shared_informer.go:320] Caches are synced for garbage collector
	I0920 18:35:21.226645       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="17.581405ms"
	I0920 18:35:21.227613       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="37.089µs"
	
	
	==> kube-proxy [830a9e361bd83ba6fb475f6c0ea298beea967a4a82d79c6789d50383ddee292c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:36:00.238979       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 18:36:00.249250       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.93"]
	E0920 18:36:00.249453       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:36:00.382732       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:36:00.382796       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:36:00.382820       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:36:00.388124       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:36:00.388403       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:36:00.388433       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:36:00.390201       1 config.go:199] "Starting service config controller"
	I0920 18:36:00.390235       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:36:00.390264       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:36:00.390284       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:36:00.390621       1 config.go:328] "Starting node config controller"
	I0920 18:36:00.390650       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:36:00.490777       1 shared_informer.go:320] Caches are synced for node config
	I0920 18:36:00.490820       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:36:00.490840       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [de751dc72438df7c2e75ead88ead0ddcf00fa24dece094bfb9b3663fd5324641] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:35:16.537307       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 18:35:16.545052       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.93"]
	E0920 18:35:16.545119       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:35:16.581543       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:35:16.581622       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:35:16.581655       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:35:16.584109       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:35:16.584352       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:35:16.584523       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:35:16.588103       1 config.go:199] "Starting service config controller"
	I0920 18:35:16.588165       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:35:16.588230       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:35:16.588247       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:35:16.588815       1 config.go:328] "Starting node config controller"
	I0920 18:35:16.588850       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:35:16.689173       1 shared_informer.go:320] Caches are synced for node config
	I0920 18:35:16.689210       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:35:16.689224       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3aefe896edb12efcaef0b550d11d4d87828e02076446a338c5c0a89c582fefd9] <==
	I0920 18:35:57.070969       1 serving.go:386] Generated self-signed cert in-memory
	W0920 18:35:58.790018       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 18:35:58.790112       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 18:35:58.790122       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 18:35:58.790127       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 18:35:58.837959       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 18:35:58.838043       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:35:58.840042       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 18:35:58.840133       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 18:35:58.840176       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 18:35:58.840281       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 18:35:58.941665       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [8d92ff3168d16da22d6afd2037de4213d705f7e7f3e8b757785a2789640e33b5] <==
	I0920 18:35:12.840959       1 serving.go:386] Generated self-signed cert in-memory
	W0920 18:35:15.856114       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 18:35:15.856215       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 18:35:15.856225       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 18:35:15.856231       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 18:35:15.907031       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 18:35:15.907125       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:35:15.911497       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 18:35:15.911587       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 18:35:15.911603       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 18:35:15.911627       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 18:35:16.012554       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 18:35:40.689387       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0920 18:35:40.689476       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0920 18:35:40.690112       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0920 18:35:40.690416       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 20 18:45:41 functional-023857 kubelet[5361]: E0920 18:45:41.677406    5361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="2af898fd-7b04-41ed-8c9e-0651f29c22bc"
	Sep 20 18:45:45 functional-023857 kubelet[5361]: E0920 18:45:45.918834    5361 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857945918399298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232799,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:45:45 functional-023857 kubelet[5361]: E0920 18:45:45.919144    5361 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857945918399298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232799,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:45:50 functional-023857 kubelet[5361]: E0920 18:45:50.675469    5361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\"\"" pod="default/mysql-6cdb49bbb-p6v98" podUID="1df5da07-a4a6-40cb-85a8-25375612c542"
	Sep 20 18:45:55 functional-023857 kubelet[5361]: E0920 18:45:55.696122    5361 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 18:45:55 functional-023857 kubelet[5361]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 18:45:55 functional-023857 kubelet[5361]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 18:45:55 functional-023857 kubelet[5361]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 18:45:55 functional-023857 kubelet[5361]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 18:45:55 functional-023857 kubelet[5361]: E0920 18:45:55.920704    5361 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857955920474434,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232799,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:45:55 functional-023857 kubelet[5361]: E0920 18:45:55.920742    5361 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857955920474434,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232799,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:46:01 functional-023857 kubelet[5361]: E0920 18:46:01.677090    5361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\"\"" pod="default/mysql-6cdb49bbb-p6v98" podUID="1df5da07-a4a6-40cb-85a8-25375612c542"
	Sep 20 18:46:05 functional-023857 kubelet[5361]: E0920 18:46:05.922712    5361 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857965922377741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232799,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:46:05 functional-023857 kubelet[5361]: E0920 18:46:05.922821    5361 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857965922377741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232799,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:46:15 functional-023857 kubelet[5361]: E0920 18:46:15.676451    5361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\"\"" pod="default/mysql-6cdb49bbb-p6v98" podUID="1df5da07-a4a6-40cb-85a8-25375612c542"
	Sep 20 18:46:15 functional-023857 kubelet[5361]: E0920 18:46:15.924325    5361 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857975924042914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232799,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:46:15 functional-023857 kubelet[5361]: E0920 18:46:15.924476    5361 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857975924042914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232799,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:46:25 functional-023857 kubelet[5361]: E0920 18:46:25.929824    5361 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857985926277718,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232799,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:46:25 functional-023857 kubelet[5361]: E0920 18:46:25.930159    5361 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857985926277718,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232799,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:46:26 functional-023857 kubelet[5361]: E0920 18:46:26.488019    5361 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from image index: reading manifest sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 20 18:46:26 functional-023857 kubelet[5361]: E0920 18:46:26.488276    5361 kuberuntime_image.go:55] "Failed to pull image" err="fetching target platform image selected from image index: reading manifest sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 20 18:46:26 functional-023857 kubelet[5361]: E0920 18:46:26.488617    5361 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59dqs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod sp-pod_default(2af898fd-7b04-41ed-8c9e-0651f29c22bc): ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 20 18:46:26 functional-023857 kubelet[5361]: E0920 18:46:26.490022    5361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"fetching target platform image selected from image index: reading manifest sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="2af898fd-7b04-41ed-8c9e-0651f29c22bc"
	Sep 20 18:46:35 functional-023857 kubelet[5361]: E0920 18:46:35.931651    5361 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857995931431765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232799,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:46:35 functional-023857 kubelet[5361]: E0920 18:46:35.931697    5361 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857995931431765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232799,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [98422586941ed5d5be11194f2972f46131b0c8df14c0c534d99ff741cda814ea] <==
	2024/09/20 18:37:42 Starting overwatch
	2024/09/20 18:37:42 Using namespace: kubernetes-dashboard
	2024/09/20 18:37:42 Using in-cluster config to connect to apiserver
	2024/09/20 18:37:42 Using secret token for csrf signing
	2024/09/20 18:37:42 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/20 18:37:42 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/20 18:37:42 Successful initial request to the apiserver, version: v1.31.1
	2024/09/20 18:37:42 Generating JWE encryption key
	2024/09/20 18:37:42 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/20 18:37:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/20 18:37:42 Initializing JWE encryption key from synchronized object
	2024/09/20 18:37:42 Creating in-cluster Sidecar client
	2024/09/20 18:37:42 Serving insecurely on HTTP port: 9090
	2024/09/20 18:37:42 Successful request to sidecar
	
	
	==> storage-provisioner [4886699ad63bba9d38ef13eb8597a566114a9ed695dd0a63c2b15c14676c6ba9] <==
	I0920 18:35:16.348895       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 18:35:16.380208       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 18:35:16.380259       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 18:35:33.786576       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 18:35:33.786783       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-023857_4c4a8201-b60b-4e99-b3c0-0e0168c9d5a6!
	I0920 18:35:33.787287       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3bc7a283-94d6-45db-a725-a48b7fea6f02", APIVersion:"v1", ResourceVersion:"496", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-023857_4c4a8201-b60b-4e99-b3c0-0e0168c9d5a6 became leader
	I0920 18:35:33.887443       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-023857_4c4a8201-b60b-4e99-b3c0-0e0168c9d5a6!
	
	
	==> storage-provisioner [a895e49e3fae6b133f7d6971a77f242750e509c1acf7736347302588367441d4] <==
	I0920 18:35:59.972851       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 18:36:00.027239       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 18:36:00.027735       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 18:36:17.438110       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 18:36:17.438832       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3bc7a283-94d6-45db-a725-a48b7fea6f02", APIVersion:"v1", ResourceVersion:"594", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-023857_ab64c720-1759-4961-a184-4cbec2442aad became leader
	I0920 18:36:17.438873       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-023857_ab64c720-1759-4961-a184-4cbec2442aad!
	I0920 18:36:17.540077       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-023857_ab64c720-1759-4961-a184-4cbec2442aad!
	I0920 18:36:29.538438       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0920 18:36:29.546992       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"bf4d0cb7-4337-43e3-8357-543af67d4579", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0920 18:36:29.546558       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    08b137ab-dca1-457e-9462-73164eff8ffa 347 0 2024-09-20 18:34:45 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-20 18:34:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-bf4d0cb7-4337-43e3-8357-543af67d4579 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  bf4d0cb7-4337-43e3-8357-543af67d4579 690 0 2024-09-20 18:36:29 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-20 18:36:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-20 18:36:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0920 18:36:29.559419       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-bf4d0cb7-4337-43e3-8357-543af67d4579" provisioned
	I0920 18:36:29.559471       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0920 18:36:29.559486       1 volume_store.go:212] Trying to save persistentvolume "pvc-bf4d0cb7-4337-43e3-8357-543af67d4579"
	I0920 18:36:29.593396       1 volume_store.go:219] persistentvolume "pvc-bf4d0cb7-4337-43e3-8357-543af67d4579" saved
	I0920 18:36:29.597864       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"bf4d0cb7-4337-43e3-8357-543af67d4579", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-bf4d0cb7-4337-43e3-8357-543af67d4579
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-023857 -n functional-023857
helpers_test.go:261: (dbg) Run:  kubectl --context functional-023857 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-6cdb49bbb-p6v98 sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-023857 describe pod busybox-mount mysql-6cdb49bbb-p6v98 sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-023857 describe pod busybox-mount mysql-6cdb49bbb-p6v98 sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-023857/192.168.39.93
	Start Time:       Fri, 20 Sep 2024 18:36:37 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  cri-o://c1603e7fc015d5c007156a7ce65b313c1ca3bade746930b1bcef783bce75b237
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 20 Sep 2024 18:37:35 +0000
	      Finished:     Fri, 20 Sep 2024 18:37:35 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tw7s5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-tw7s5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  10m    default-scheduler  Successfully assigned default/busybox-mount to functional-023857
	  Normal  Pulling    9m59s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m2s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.169s (57.02s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m2s   kubelet            Created container mount-munger
	  Normal  Started    9m2s   kubelet            Started container mount-munger
	
	
	Name:             mysql-6cdb49bbb-p6v98
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-023857/192.168.39.93
	Start Time:       Fri, 20 Sep 2024 18:36:35 +0000
	Labels:           app=mysql
	                  pod-template-hash=6cdb49bbb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-6cdb49bbb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zsfmt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zsfmt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-6cdb49bbb-p6v98 to functional-023857
	  Warning  Failed     7m47s                 kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    5m56s (x4 over 10m)   kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     5m6s (x3 over 9m4s)   kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     5m6s (x4 over 9m4s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m39s (x7 over 9m3s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     4m39s (x7 over 9m3s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-023857/192.168.39.93
	Start Time:       Fri, 20 Sep 2024 18:36:31 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-59dqs (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-59dqs:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/sp-pod to functional-023857
	  Warning  Failed     7m17s (x2 over 8m24s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    6m22s (x4 over 10m)    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     5m37s (x2 over 9m34s)  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     5m37s (x4 over 9m34s)  kubelet            Error: ErrImagePull
	  Warning  Failed     5m22s (x6 over 9m33s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m55s (x8 over 9m33s)  kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/MySQL FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/MySQL (602.68s)
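The pod events above show why this test timed out rather than failing inside minikube itself: every pull of docker.io/mysql:5.7 (mysql-6cdb49bbb-p6v98) and docker.io/nginx (sp-pod) was rejected by Docker Hub with toomanyrequests, so neither container ever started. A possible mitigation, sketched here under the assumption that the images can be fetched once on the build host (the profile/context name functional-023857 is taken from the logs above; the secret name dockerhub-creds is hypothetical), is to pre-load the images into the cluster or attach authenticated pull credentials:

	# Sketch: pre-load the rate-limited images so the pods never pull from Docker Hub.
	docker pull mysql:5.7 && out/minikube-linux-amd64 -p functional-023857 image load mysql:5.7
	docker pull nginx && out/minikube-linux-amd64 -p functional-023857 image load nginx
	# Alternative sketch: authenticate the pulls via a registry secret on the default service account.
	kubectl --context functional-023857 create secret docker-registry dockerhub-creds \
	  --docker-server=https://index.docker.io/v1/ --docker-username=<user> --docker-password=<access-token>
	kubectl --context functional-023857 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'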

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 node stop m02 -v=7 --alsologtostderr
E0920 18:51:18.969679  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:51:24.179214  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:51:24.185576  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:51:24.196939  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:51:24.218325  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:51:24.259775  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:51:24.341209  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:51:24.502765  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:51:24.824464  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:51:25.465769  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:51:26.747021  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:51:29.308886  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:51:34.430357  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:51:44.672118  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:52:05.153748  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:52:46.115702  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-525790 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.469390733s)

                                                
                                                
-- stdout --
	* Stopping node "ha-525790-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:51:05.621919  767021 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:51:05.622179  767021 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:51:05.622190  767021 out.go:358] Setting ErrFile to fd 2...
	I0920 18:51:05.622194  767021 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:51:05.622359  767021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
	I0920 18:51:05.622645  767021 mustload.go:65] Loading cluster: ha-525790
	I0920 18:51:05.623083  767021 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:51:05.623106  767021 stop.go:39] StopHost: ha-525790-m02
	I0920 18:51:05.623481  767021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:51:05.623524  767021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:51:05.639466  767021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36215
	I0920 18:51:05.639979  767021 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:51:05.640621  767021 main.go:141] libmachine: Using API Version  1
	I0920 18:51:05.640646  767021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:51:05.641096  767021 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:51:05.643441  767021 out.go:177] * Stopping node "ha-525790-m02"  ...
	I0920 18:51:05.644838  767021 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0920 18:51:05.644869  767021 main.go:141] libmachine: (ha-525790-m02) Calling .DriverName
	I0920 18:51:05.645128  767021 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0920 18:51:05.645172  767021 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:51:05.648480  767021 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:51:05.649057  767021 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:51:05.649103  767021 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:51:05.649229  767021 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:51:05.649420  767021 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:51:05.649593  767021 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:51:05.649738  767021 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/id_rsa Username:docker}
	I0920 18:51:05.736266  767021 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0920 18:51:05.790517  767021 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0920 18:51:05.848203  767021 main.go:141] libmachine: Stopping "ha-525790-m02"...
	I0920 18:51:05.848249  767021 main.go:141] libmachine: (ha-525790-m02) Calling .GetState
	I0920 18:51:05.849838  767021 main.go:141] libmachine: (ha-525790-m02) Calling .Stop
	I0920 18:51:05.853424  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 0/120
	I0920 18:51:06.854813  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 1/120
	I0920 18:51:07.856211  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 2/120
	I0920 18:51:08.857518  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 3/120
	I0920 18:51:09.859041  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 4/120
	I0920 18:51:10.861004  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 5/120
	I0920 18:51:11.863069  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 6/120
	I0920 18:51:12.865306  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 7/120
	I0920 18:51:13.866712  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 8/120
	I0920 18:51:14.868630  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 9/120
	I0920 18:51:15.870938  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 10/120
	I0920 18:51:16.872251  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 11/120
	I0920 18:51:17.873748  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 12/120
	I0920 18:51:18.875227  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 13/120
	I0920 18:51:19.877324  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 14/120
	I0920 18:51:20.879392  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 15/120
	I0920 18:51:21.881372  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 16/120
	I0920 18:51:22.883548  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 17/120
	I0920 18:51:23.885590  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 18/120
	I0920 18:51:24.886936  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 19/120
	I0920 18:51:25.889108  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 20/120
	I0920 18:51:26.890458  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 21/120
	I0920 18:51:27.891922  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 22/120
	I0920 18:51:28.893162  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 23/120
	I0920 18:51:29.894487  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 24/120
	I0920 18:51:30.896443  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 25/120
	I0920 18:51:31.897745  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 26/120
	I0920 18:51:32.899067  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 27/120
	I0920 18:51:33.901562  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 28/120
	I0920 18:51:34.902936  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 29/120
	I0920 18:51:35.904900  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 30/120
	I0920 18:51:36.906735  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 31/120
	I0920 18:51:37.908030  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 32/120
	I0920 18:51:38.909263  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 33/120
	I0920 18:51:39.910492  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 34/120
	I0920 18:51:40.912346  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 35/120
	I0920 18:51:41.913623  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 36/120
	I0920 18:51:42.914977  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 37/120
	I0920 18:51:43.916283  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 38/120
	I0920 18:51:44.917697  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 39/120
	I0920 18:51:45.919827  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 40/120
	I0920 18:51:46.921336  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 41/120
	I0920 18:51:47.922549  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 42/120
	I0920 18:51:48.924567  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 43/120
	I0920 18:51:49.926041  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 44/120
	I0920 18:51:50.928057  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 45/120
	I0920 18:51:51.929489  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 46/120
	I0920 18:51:52.930865  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 47/120
	I0920 18:51:53.932169  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 48/120
	I0920 18:51:54.933359  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 49/120
	I0920 18:51:55.934864  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 50/120
	I0920 18:51:56.936088  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 51/120
	I0920 18:51:57.937473  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 52/120
	I0920 18:51:58.938720  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 53/120
	I0920 18:51:59.939987  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 54/120
	I0920 18:52:00.942077  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 55/120
	I0920 18:52:01.943527  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 56/120
	I0920 18:52:02.945098  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 57/120
	I0920 18:52:03.946797  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 58/120
	I0920 18:52:04.948227  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 59/120
	I0920 18:52:05.950155  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 60/120
	I0920 18:52:06.951456  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 61/120
	I0920 18:52:07.952781  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 62/120
	I0920 18:52:08.954046  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 63/120
	I0920 18:52:09.955359  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 64/120
	I0920 18:52:10.957127  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 65/120
	I0920 18:52:11.958411  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 66/120
	I0920 18:52:12.959913  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 67/120
	I0920 18:52:13.961571  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 68/120
	I0920 18:52:14.962912  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 69/120
	I0920 18:52:15.965215  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 70/120
	I0920 18:52:16.966527  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 71/120
	I0920 18:52:17.967832  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 72/120
	I0920 18:52:18.969305  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 73/120
	I0920 18:52:19.971174  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 74/120
	I0920 18:52:20.972956  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 75/120
	I0920 18:52:21.974215  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 76/120
	I0920 18:52:22.975624  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 77/120
	I0920 18:52:23.977491  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 78/120
	I0920 18:52:24.978823  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 79/120
	I0920 18:52:25.980747  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 80/120
	I0920 18:52:26.982056  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 81/120
	I0920 18:52:27.983374  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 82/120
	I0920 18:52:28.984694  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 83/120
	I0920 18:52:29.985965  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 84/120
	I0920 18:52:30.987944  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 85/120
	I0920 18:52:31.989290  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 86/120
	I0920 18:52:32.990663  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 87/120
	I0920 18:52:33.992058  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 88/120
	I0920 18:52:34.993454  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 89/120
	I0920 18:52:35.995237  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 90/120
	I0920 18:52:36.996621  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 91/120
	I0920 18:52:37.998315  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 92/120
	I0920 18:52:39.000313  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 93/120
	I0920 18:52:40.001907  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 94/120
	I0920 18:52:41.003662  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 95/120
	I0920 18:52:42.004994  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 96/120
	I0920 18:52:43.006248  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 97/120
	I0920 18:52:44.007817  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 98/120
	I0920 18:52:45.010107  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 99/120
	I0920 18:52:46.012277  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 100/120
	I0920 18:52:47.013670  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 101/120
	I0920 18:52:48.015230  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 102/120
	I0920 18:52:49.017450  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 103/120
	I0920 18:52:50.018906  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 104/120
	I0920 18:52:51.020361  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 105/120
	I0920 18:52:52.021721  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 106/120
	I0920 18:52:53.023598  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 107/120
	I0920 18:52:54.024978  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 108/120
	I0920 18:52:55.026623  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 109/120
	I0920 18:52:56.028705  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 110/120
	I0920 18:52:57.030220  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 111/120
	I0920 18:52:58.031506  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 112/120
	I0920 18:52:59.033374  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 113/120
	I0920 18:53:00.035793  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 114/120
	I0920 18:53:01.038150  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 115/120
	I0920 18:53:02.039737  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 116/120
	I0920 18:53:03.042320  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 117/120
	I0920 18:53:04.044137  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 118/120
	I0920 18:53:05.045379  767021 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 119/120
	I0920 18:53:06.046839  767021 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0920 18:53:06.047024  767021 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-525790 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Done: out/minikube-linux-amd64 -p ha-525790 status -v=7 --alsologtostderr: (18.735755012s)
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-525790 status -v=7 --alsologtostderr": 
ha_test.go:378: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-525790 status -v=7 --alsologtostderr": 
ha_test.go:381: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-525790 status -v=7 --alsologtostderr": 
ha_test.go:384: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-525790 status -v=7 --alsologtostderr": 
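The stderr above shows the kvm2 driver polling "Waiting for machine to stop N/120" once per second for all 120 attempts before giving up with 'unable to stop vm, current state "Running"', which matches the 2m0.47s non-zero exit. A diagnostic sketch, assuming libvirt's virsh CLI is available on the Jenkins host (not shown in this log), for inspecting and, if necessary, forcing off the stuck domain named in the log (ha-525790-m02):

	# Sketch: inspect the libvirt domain the kvm2 driver could not stop.
	virsh list --all                 # ha-525790-m02 would still be listed as "running"
	virsh shutdown ha-525790-m02     # retry a graceful ACPI shutdown
	virsh destroy ha-525790-m02      # last resort: hard power-off of the VM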
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-525790 -n ha-525790
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-525790 logs -n 25: (1.449490149s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-525790 cp ha-525790-m03:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3362703692/001/cp-test_ha-525790-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m03:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790:/home/docker/cp-test_ha-525790-m03_ha-525790.txt                       |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790 sudo cat                                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m03_ha-525790.txt                                 |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m03:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m02:/home/docker/cp-test_ha-525790-m03_ha-525790-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790-m02 sudo cat                                          | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m03_ha-525790-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m03:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04:/home/docker/cp-test_ha-525790-m03_ha-525790-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790-m04 sudo cat                                          | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m03_ha-525790-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-525790 cp testdata/cp-test.txt                                                | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3362703692/001/cp-test_ha-525790-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790:/home/docker/cp-test_ha-525790-m04_ha-525790.txt                       |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790 sudo cat                                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m04_ha-525790.txt                                 |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m02:/home/docker/cp-test_ha-525790-m04_ha-525790-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790-m02 sudo cat                                          | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m04_ha-525790-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m03:/home/docker/cp-test_ha-525790-m04_ha-525790-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790-m03 sudo cat                                          | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m04_ha-525790-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-525790 node stop m02 -v=7                                                     | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:46:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:46:38.789149  762988 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:46:38.789304  762988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:46:38.789316  762988 out.go:358] Setting ErrFile to fd 2...
	I0920 18:46:38.789323  762988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:46:38.789530  762988 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
	I0920 18:46:38.790164  762988 out.go:352] Setting JSON to false
	I0920 18:46:38.791213  762988 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8949,"bootTime":1726849050,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:46:38.791325  762988 start.go:139] virtualization: kvm guest
	I0920 18:46:38.794321  762988 out.go:177] * [ha-525790] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:46:38.795880  762988 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 18:46:38.795921  762988 notify.go:220] Checking for updates...
	I0920 18:46:38.798815  762988 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:46:38.800212  762988 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:46:38.801657  762988 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:46:38.802936  762988 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:46:38.804312  762988 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:46:38.805745  762988 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:46:38.840721  762988 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 18:46:38.841998  762988 start.go:297] selected driver: kvm2
	I0920 18:46:38.842017  762988 start.go:901] validating driver "kvm2" against <nil>
	I0920 18:46:38.842030  762988 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:46:38.842791  762988 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:46:38.842923  762988 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19678-739831/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:46:38.857953  762988 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:46:38.858007  762988 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:46:38.858244  762988 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:46:38.858274  762988 cni.go:84] Creating CNI manager for ""
	I0920 18:46:38.858324  762988 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0920 18:46:38.858332  762988 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 18:46:38.858385  762988 start.go:340] cluster config:
	{Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:46:38.858482  762988 iso.go:125] acquiring lock: {Name:mk7c8e0c52ea50ffb7ac28fb9347d4f667085c62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:46:38.861017  762988 out.go:177] * Starting "ha-525790" primary control-plane node in "ha-525790" cluster
	I0920 18:46:38.862480  762988 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:46:38.862534  762988 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:46:38.862548  762988 cache.go:56] Caching tarball of preloaded images
	I0920 18:46:38.862674  762988 preload.go:172] Found /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:46:38.862687  762988 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:46:38.863061  762988 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 18:46:38.863096  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json: {Name:mk5c775b0f6d6c9cf399952e81d482461c2f3276 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:46:38.863265  762988 start.go:360] acquireMachinesLock for ha-525790: {Name:mke27b943eaf3105a3a7818ba8cbb5bd07aa92e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:46:38.863304  762988 start.go:364] duration metric: took 22.887µs to acquireMachinesLock for "ha-525790"
	I0920 18:46:38.863326  762988 start.go:93] Provisioning new machine with config: &{Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:46:38.863386  762988 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 18:46:38.865997  762988 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 18:46:38.866141  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:46:38.866188  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:46:38.881131  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35251
	I0920 18:46:38.881605  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:46:38.882180  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:46:38.882202  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:46:38.882573  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:46:38.882762  762988 main.go:141] libmachine: (ha-525790) Calling .GetMachineName
	I0920 18:46:38.882960  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:46:38.883106  762988 start.go:159] libmachine.API.Create for "ha-525790" (driver="kvm2")
	I0920 18:46:38.883131  762988 client.go:168] LocalClient.Create starting
	I0920 18:46:38.883164  762988 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem
	I0920 18:46:38.883195  762988 main.go:141] libmachine: Decoding PEM data...
	I0920 18:46:38.883209  762988 main.go:141] libmachine: Parsing certificate...
	I0920 18:46:38.883266  762988 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem
	I0920 18:46:38.883283  762988 main.go:141] libmachine: Decoding PEM data...
	I0920 18:46:38.883293  762988 main.go:141] libmachine: Parsing certificate...
	I0920 18:46:38.883309  762988 main.go:141] libmachine: Running pre-create checks...
	I0920 18:46:38.883317  762988 main.go:141] libmachine: (ha-525790) Calling .PreCreateCheck
	I0920 18:46:38.883674  762988 main.go:141] libmachine: (ha-525790) Calling .GetConfigRaw
	I0920 18:46:38.884046  762988 main.go:141] libmachine: Creating machine...
	I0920 18:46:38.884058  762988 main.go:141] libmachine: (ha-525790) Calling .Create
	I0920 18:46:38.884186  762988 main.go:141] libmachine: (ha-525790) Creating KVM machine...
	I0920 18:46:38.885388  762988 main.go:141] libmachine: (ha-525790) DBG | found existing default KVM network
	I0920 18:46:38.886155  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:38.886012  763011 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015bb0}
	I0920 18:46:38.886212  762988 main.go:141] libmachine: (ha-525790) DBG | created network xml: 
	I0920 18:46:38.886231  762988 main.go:141] libmachine: (ha-525790) DBG | <network>
	I0920 18:46:38.886238  762988 main.go:141] libmachine: (ha-525790) DBG |   <name>mk-ha-525790</name>
	I0920 18:46:38.886242  762988 main.go:141] libmachine: (ha-525790) DBG |   <dns enable='no'/>
	I0920 18:46:38.886247  762988 main.go:141] libmachine: (ha-525790) DBG |   
	I0920 18:46:38.886265  762988 main.go:141] libmachine: (ha-525790) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 18:46:38.886272  762988 main.go:141] libmachine: (ha-525790) DBG |     <dhcp>
	I0920 18:46:38.886279  762988 main.go:141] libmachine: (ha-525790) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 18:46:38.886301  762988 main.go:141] libmachine: (ha-525790) DBG |     </dhcp>
	I0920 18:46:38.886355  762988 main.go:141] libmachine: (ha-525790) DBG |   </ip>
	I0920 18:46:38.886369  762988 main.go:141] libmachine: (ha-525790) DBG |   
	I0920 18:46:38.886374  762988 main.go:141] libmachine: (ha-525790) DBG | </network>
	I0920 18:46:38.886382  762988 main.go:141] libmachine: (ha-525790) DBG | 
	I0920 18:46:38.891425  762988 main.go:141] libmachine: (ha-525790) DBG | trying to create private KVM network mk-ha-525790 192.168.39.0/24...
	I0920 18:46:38.955444  762988 main.go:141] libmachine: (ha-525790) Setting up store path in /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790 ...
	I0920 18:46:38.955497  762988 main.go:141] libmachine: (ha-525790) Building disk image from file:///home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 18:46:38.955509  762988 main.go:141] libmachine: (ha-525790) DBG | private KVM network mk-ha-525790 192.168.39.0/24 created
	I0920 18:46:38.955527  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:38.955388  763011 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:46:38.955546  762988 main.go:141] libmachine: (ha-525790) Downloading /home/jenkins/minikube-integration/19678-739831/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 18:46:39.243592  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:39.243485  763011 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa...
	I0920 18:46:39.608366  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:39.608221  763011 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/ha-525790.rawdisk...
	I0920 18:46:39.608404  762988 main.go:141] libmachine: (ha-525790) DBG | Writing magic tar header
	I0920 18:46:39.608446  762988 main.go:141] libmachine: (ha-525790) DBG | Writing SSH key tar header
	I0920 18:46:39.608516  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:39.608475  763011 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790 ...
	I0920 18:46:39.608599  762988 main.go:141] libmachine: (ha-525790) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790
	I0920 18:46:39.608627  762988 main.go:141] libmachine: (ha-525790) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790 (perms=drwx------)
	I0920 18:46:39.608656  762988 main.go:141] libmachine: (ha-525790) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube/machines (perms=drwxr-xr-x)
	I0920 18:46:39.608670  762988 main.go:141] libmachine: (ha-525790) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube/machines
	I0920 18:46:39.608683  762988 main.go:141] libmachine: (ha-525790) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:46:39.608695  762988 main.go:141] libmachine: (ha-525790) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831
	I0920 18:46:39.608706  762988 main.go:141] libmachine: (ha-525790) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube (perms=drwxr-xr-x)
	I0920 18:46:39.608718  762988 main.go:141] libmachine: (ha-525790) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 18:46:39.608730  762988 main.go:141] libmachine: (ha-525790) DBG | Checking permissions on dir: /home/jenkins
	I0920 18:46:39.608740  762988 main.go:141] libmachine: (ha-525790) DBG | Checking permissions on dir: /home
	I0920 18:46:39.608750  762988 main.go:141] libmachine: (ha-525790) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831 (perms=drwxrwxr-x)
	I0920 18:46:39.608763  762988 main.go:141] libmachine: (ha-525790) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 18:46:39.608777  762988 main.go:141] libmachine: (ha-525790) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 18:46:39.608788  762988 main.go:141] libmachine: (ha-525790) Creating domain...
	I0920 18:46:39.608796  762988 main.go:141] libmachine: (ha-525790) DBG | Skipping /home - not owner
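
"Creating raw disk image" and the "Setting executable bit" lines above amount to creating a large sparse file for the VM disk and making sure every parent directory of the store path is traversable. A minimal sketch of both, using a hypothetical path and the 20000 MB disk size from the cluster config; the real driver also writes the SSH key into the image, which is omitted here.

package main

import (
	"log"
	"os"
	"path/filepath"
)

func main() {
	machineDir := filepath.Join(os.Getenv("HOME"), ".minikube", "machines", "ha-525790")
	if err := os.MkdirAll(machineDir, 0700); err != nil {
		log.Fatal(err)
	}

	// Create the raw disk as a sparse file: Truncate reserves the logical size
	// (20000 MB here) without allocating data blocks.
	disk := filepath.Join(machineDir, "ha-525790.rawdisk")
	f, err := os.Create(disk)
	if err != nil {
		log.Fatal(err)
	}
	if err := f.Truncate(20000 * 1024 * 1024); err != nil {
		log.Fatal(err)
	}
	f.Close()

	// Walk up from the machine directory and make sure each parent we can stat
	// is at least owner-executable (the "Setting executable bit" lines).
	for dir := machineDir; dir != "/" && dir != "."; dir = filepath.Dir(dir) {
		info, err := os.Stat(dir)
		if err != nil || info.Mode().Perm()&0100 != 0 {
			continue // missing, not ours, or already executable
		}
		if err := os.Chmod(dir, info.Mode().Perm()|0100); err != nil {
			log.Printf("skipping %s: %v", dir, err)
		}
	}
}
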
	I0920 18:46:39.609887  762988 main.go:141] libmachine: (ha-525790) define libvirt domain using xml: 
	I0920 18:46:39.609929  762988 main.go:141] libmachine: (ha-525790) <domain type='kvm'>
	I0920 18:46:39.609936  762988 main.go:141] libmachine: (ha-525790)   <name>ha-525790</name>
	I0920 18:46:39.609941  762988 main.go:141] libmachine: (ha-525790)   <memory unit='MiB'>2200</memory>
	I0920 18:46:39.609946  762988 main.go:141] libmachine: (ha-525790)   <vcpu>2</vcpu>
	I0920 18:46:39.609950  762988 main.go:141] libmachine: (ha-525790)   <features>
	I0920 18:46:39.609954  762988 main.go:141] libmachine: (ha-525790)     <acpi/>
	I0920 18:46:39.609958  762988 main.go:141] libmachine: (ha-525790)     <apic/>
	I0920 18:46:39.609963  762988 main.go:141] libmachine: (ha-525790)     <pae/>
	I0920 18:46:39.609972  762988 main.go:141] libmachine: (ha-525790)     
	I0920 18:46:39.609977  762988 main.go:141] libmachine: (ha-525790)   </features>
	I0920 18:46:39.609981  762988 main.go:141] libmachine: (ha-525790)   <cpu mode='host-passthrough'>
	I0920 18:46:39.609988  762988 main.go:141] libmachine: (ha-525790)   
	I0920 18:46:39.609991  762988 main.go:141] libmachine: (ha-525790)   </cpu>
	I0920 18:46:39.609996  762988 main.go:141] libmachine: (ha-525790)   <os>
	I0920 18:46:39.610000  762988 main.go:141] libmachine: (ha-525790)     <type>hvm</type>
	I0920 18:46:39.610004  762988 main.go:141] libmachine: (ha-525790)     <boot dev='cdrom'/>
	I0920 18:46:39.610012  762988 main.go:141] libmachine: (ha-525790)     <boot dev='hd'/>
	I0920 18:46:39.610034  762988 main.go:141] libmachine: (ha-525790)     <bootmenu enable='no'/>
	I0920 18:46:39.610055  762988 main.go:141] libmachine: (ha-525790)   </os>
	I0920 18:46:39.610063  762988 main.go:141] libmachine: (ha-525790)   <devices>
	I0920 18:46:39.610071  762988 main.go:141] libmachine: (ha-525790)     <disk type='file' device='cdrom'>
	I0920 18:46:39.610087  762988 main.go:141] libmachine: (ha-525790)       <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/boot2docker.iso'/>
	I0920 18:46:39.610097  762988 main.go:141] libmachine: (ha-525790)       <target dev='hdc' bus='scsi'/>
	I0920 18:46:39.610105  762988 main.go:141] libmachine: (ha-525790)       <readonly/>
	I0920 18:46:39.610111  762988 main.go:141] libmachine: (ha-525790)     </disk>
	I0920 18:46:39.610117  762988 main.go:141] libmachine: (ha-525790)     <disk type='file' device='disk'>
	I0920 18:46:39.610124  762988 main.go:141] libmachine: (ha-525790)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 18:46:39.610165  762988 main.go:141] libmachine: (ha-525790)       <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/ha-525790.rawdisk'/>
	I0920 18:46:39.610187  762988 main.go:141] libmachine: (ha-525790)       <target dev='hda' bus='virtio'/>
	I0920 18:46:39.610197  762988 main.go:141] libmachine: (ha-525790)     </disk>
	I0920 18:46:39.610210  762988 main.go:141] libmachine: (ha-525790)     <interface type='network'>
	I0920 18:46:39.610222  762988 main.go:141] libmachine: (ha-525790)       <source network='mk-ha-525790'/>
	I0920 18:46:39.610232  762988 main.go:141] libmachine: (ha-525790)       <model type='virtio'/>
	I0920 18:46:39.610240  762988 main.go:141] libmachine: (ha-525790)     </interface>
	I0920 18:46:39.610250  762988 main.go:141] libmachine: (ha-525790)     <interface type='network'>
	I0920 18:46:39.610258  762988 main.go:141] libmachine: (ha-525790)       <source network='default'/>
	I0920 18:46:39.610275  762988 main.go:141] libmachine: (ha-525790)       <model type='virtio'/>
	I0920 18:46:39.610283  762988 main.go:141] libmachine: (ha-525790)     </interface>
	I0920 18:46:39.610288  762988 main.go:141] libmachine: (ha-525790)     <serial type='pty'>
	I0920 18:46:39.610292  762988 main.go:141] libmachine: (ha-525790)       <target port='0'/>
	I0920 18:46:39.610299  762988 main.go:141] libmachine: (ha-525790)     </serial>
	I0920 18:46:39.610308  762988 main.go:141] libmachine: (ha-525790)     <console type='pty'>
	I0920 18:46:39.610326  762988 main.go:141] libmachine: (ha-525790)       <target type='serial' port='0'/>
	I0920 18:46:39.610338  762988 main.go:141] libmachine: (ha-525790)     </console>
	I0920 18:46:39.610349  762988 main.go:141] libmachine: (ha-525790)     <rng model='virtio'>
	I0920 18:46:39.610362  762988 main.go:141] libmachine: (ha-525790)       <backend model='random'>/dev/random</backend>
	I0920 18:46:39.610371  762988 main.go:141] libmachine: (ha-525790)     </rng>
	I0920 18:46:39.610375  762988 main.go:141] libmachine: (ha-525790)     
	I0920 18:46:39.610381  762988 main.go:141] libmachine: (ha-525790)     
	I0920 18:46:39.610387  762988 main.go:141] libmachine: (ha-525790)   </devices>
	I0920 18:46:39.610397  762988 main.go:141] libmachine: (ha-525790) </domain>
	I0920 18:46:39.610405  762988 main.go:141] libmachine: (ha-525790) 
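
With the domain XML assembled, "Creating domain..." is a define-then-start call against libvirt. A sketch of that pair of calls with the same Go bindings, assuming domain.xml holds the <domain type='kvm'> document printed above:

package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// domain.xml would contain the <domain type='kvm'> document from the log.
	domainXML, err := os.ReadFile("domain.xml")
	if err != nil {
		log.Fatal(err)
	}

	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define the persistent domain ("define libvirt domain using xml"),
	// then boot it ("Creating domain...").
	dom, err := conn.DomainDefineXML(string(domainXML))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("domain ha-525790 started, waiting for it to get an IP")
}
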
	I0920 18:46:39.614486  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:50:2a:69 in network default
	I0920 18:46:39.615032  762988 main.go:141] libmachine: (ha-525790) Ensuring networks are active...
	I0920 18:46:39.615051  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:39.615715  762988 main.go:141] libmachine: (ha-525790) Ensuring network default is active
	I0920 18:46:39.616018  762988 main.go:141] libmachine: (ha-525790) Ensuring network mk-ha-525790 is active
	I0920 18:46:39.616415  762988 main.go:141] libmachine: (ha-525790) Getting domain xml...
	I0920 18:46:39.617025  762988 main.go:141] libmachine: (ha-525790) Creating domain...
	I0920 18:46:40.795742  762988 main.go:141] libmachine: (ha-525790) Waiting to get IP...
	I0920 18:46:40.796420  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:40.796852  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:40.796878  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:40.796826  763011 retry.go:31] will retry after 263.82587ms: waiting for machine to come up
	I0920 18:46:41.062273  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:41.062647  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:41.062678  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:41.062592  763011 retry.go:31] will retry after 386.712635ms: waiting for machine to come up
	I0920 18:46:41.451226  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:41.451632  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:41.451661  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:41.451579  763011 retry.go:31] will retry after 342.693912ms: waiting for machine to come up
	I0920 18:46:41.796191  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:41.796691  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:41.796715  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:41.796648  763011 retry.go:31] will retry after 576.710058ms: waiting for machine to come up
	I0920 18:46:42.375515  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:42.376036  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:42.376061  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:42.375999  763011 retry.go:31] will retry after 663.670245ms: waiting for machine to come up
	I0920 18:46:43.040735  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:43.041215  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:43.041246  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:43.041140  763011 retry.go:31] will retry after 597.358521ms: waiting for machine to come up
	I0920 18:46:43.639686  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:43.640007  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:43.640036  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:43.639963  763011 retry.go:31] will retry after 1.058911175s: waiting for machine to come up
	I0920 18:46:44.700947  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:44.701385  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:44.701413  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:44.701343  763011 retry.go:31] will retry after 1.038799294s: waiting for machine to come up
	I0920 18:46:45.741663  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:45.742102  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:45.742126  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:45.742045  763011 retry.go:31] will retry after 1.383433424s: waiting for machine to come up
	I0920 18:46:47.127537  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:47.128058  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:47.128078  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:47.127983  763011 retry.go:31] will retry after 1.617569351s: waiting for machine to come up
	I0920 18:46:48.747698  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:48.748209  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:48.748240  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:48.748143  763011 retry.go:31] will retry after 2.371010271s: waiting for machine to come up
	I0920 18:46:51.120964  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:51.121427  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:51.121458  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:51.121379  763011 retry.go:31] will retry after 2.200163157s: waiting for machine to come up
	I0920 18:46:53.322674  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:53.322965  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:53.322986  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:53.322923  763011 retry.go:31] will retry after 3.176543377s: waiting for machine to come up
	I0920 18:46:56.502595  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:56.502881  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:56.502907  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:56.502808  763011 retry.go:31] will retry after 5.194371334s: waiting for machine to come up
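
The repeated "will retry after ..." lines are the driver polling the network's DHCP leases for the VM's MAC address, sleeping a progressively longer interval between attempts. A rough sketch of that loop; the growth factor and timeout are made-up values, but the lease fields (Mac, IPaddr) match the struct the log prints further down.

package main

import (
	"fmt"
	"log"
	"time"

	libvirt "libvirt.org/go/libvirt"
)

// waitForIP polls the DHCP leases of the given network until one matches mac.
func waitForIP(net *libvirt.Network, mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		leases, err := net.GetDHCPLeases()
		if err == nil {
			for _, l := range leases {
				if l.Mac == mac {
					return l.IPaddr, nil
				}
			}
		}
		log.Printf("will retry after %v: waiting for machine to come up", delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay = delay * 3 / 2 // grow the interval, roughly like the log above
		}
	}
	return "", fmt.Errorf("no DHCP lease for %s within %v", mac, timeout)
}

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	net, err := conn.LookupNetworkByName("mk-ha-525790")
	if err != nil {
		log.Fatal(err)
	}
	defer net.Free()
	ip, err := waitForIP(net, "52:54:00:93:48:3a", 3*time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("Found IP for machine:", ip)
}
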
	I0920 18:47:01.701005  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:01.701389  762988 main.go:141] libmachine: (ha-525790) Found IP for machine: 192.168.39.149
	I0920 18:47:01.701409  762988 main.go:141] libmachine: (ha-525790) Reserving static IP address...
	I0920 18:47:01.701417  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has current primary IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:01.701762  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find host DHCP lease matching {name: "ha-525790", mac: "52:54:00:93:48:3a", ip: "192.168.39.149"} in network mk-ha-525790
	I0920 18:47:01.773329  762988 main.go:141] libmachine: (ha-525790) DBG | Getting to WaitForSSH function...
	I0920 18:47:01.773358  762988 main.go:141] libmachine: (ha-525790) Reserved static IP address: 192.168.39.149
	I0920 18:47:01.773388  762988 main.go:141] libmachine: (ha-525790) Waiting for SSH to be available...
	I0920 18:47:01.776048  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:01.776426  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:minikube Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:01.776463  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:01.776622  762988 main.go:141] libmachine: (ha-525790) DBG | Using SSH client type: external
	I0920 18:47:01.776646  762988 main.go:141] libmachine: (ha-525790) DBG | Using SSH private key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa (-rw-------)
	I0920 18:47:01.776683  762988 main.go:141] libmachine: (ha-525790) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.149 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:47:01.776700  762988 main.go:141] libmachine: (ha-525790) DBG | About to run SSH command:
	I0920 18:47:01.776715  762988 main.go:141] libmachine: (ha-525790) DBG | exit 0
	I0920 18:47:01.898967  762988 main.go:141] libmachine: (ha-525790) DBG | SSH cmd err, output: <nil>: 
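
"Waiting for SSH to be available" shells out to the external ssh client with the options listed above and runs `exit 0` until it succeeds, which is what the empty "SSH cmd err, output" line records. A small stand-in for that probe; the host, key path and retry budget are illustrative.

package main

import (
	"log"
	"os/exec"
	"time"
)

// sshReady reports whether `ssh ... exit 0` succeeds against the new machine.
func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+host,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	host := "192.168.39.149"
	key := "/home/jenkins/.minikube/machines/ha-525790/id_rsa" // illustrative path
	for i := 0; i < 30; i++ {
		if sshReady(host, key) {
			log.Println("SSH is available")
			return
		}
		time.Sleep(5 * time.Second)
	}
	log.Fatal("timed out waiting for SSH")
}
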
	I0920 18:47:01.899221  762988 main.go:141] libmachine: (ha-525790) KVM machine creation complete!
	I0920 18:47:01.899544  762988 main.go:141] libmachine: (ha-525790) Calling .GetConfigRaw
	I0920 18:47:01.900277  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:01.900493  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:01.900650  762988 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 18:47:01.900666  762988 main.go:141] libmachine: (ha-525790) Calling .GetState
	I0920 18:47:01.901918  762988 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 18:47:01.901931  762988 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 18:47:01.901936  762988 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 18:47:01.901941  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:01.904499  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:01.904882  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:01.904911  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:01.905023  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:01.905203  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:01.905333  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:01.905455  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:01.905648  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:01.905950  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:47:01.905967  762988 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 18:47:02.002303  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:47:02.002325  762988 main.go:141] libmachine: Detecting the provisioner...
	I0920 18:47:02.002332  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:02.005206  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.005502  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.005524  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.005703  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:02.005932  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.006115  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.006265  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:02.006494  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:02.006725  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:47:02.006738  762988 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 18:47:02.103696  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 18:47:02.103818  762988 main.go:141] libmachine: found compatible host: buildroot
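
Provisioner detection is just `cat /etc/os-release` over SSH followed by a match on the resulting key=value pairs, which is how "found compatible host: buildroot" is reached. A sketch of the parsing side, fed the exact output captured above:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns /etc/os-release output into a map of key=value pairs,
// stripping surrounding quotes as in PRETTY_NAME="Buildroot 2023.02.9".
func parseOSRelease(out string) map[string]string {
	m := make(map[string]string)
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		m[k] = strings.Trim(v, `"`)
	}
	return m
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(out)
	if info["ID"] == "buildroot" {
		fmt.Println("found compatible host:", info["ID"])
	}
}
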
	I0920 18:47:02.103834  762988 main.go:141] libmachine: Provisioning with buildroot...
	I0920 18:47:02.103845  762988 main.go:141] libmachine: (ha-525790) Calling .GetMachineName
	I0920 18:47:02.104117  762988 buildroot.go:166] provisioning hostname "ha-525790"
	I0920 18:47:02.104147  762988 main.go:141] libmachine: (ha-525790) Calling .GetMachineName
	I0920 18:47:02.104362  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:02.107026  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.107445  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.107466  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.107725  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:02.107909  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.108050  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.108218  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:02.108380  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:02.108558  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:47:02.108576  762988 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-525790 && echo "ha-525790" | sudo tee /etc/hostname
	I0920 18:47:02.221193  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-525790
	
	I0920 18:47:02.221225  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:02.224188  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.224526  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.224548  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.224771  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:02.224973  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.225135  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.225274  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:02.225455  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:02.225692  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:47:02.225716  762988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-525790' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-525790/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-525790' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:47:02.333039  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:47:02.333077  762988 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19678-739831/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-739831/.minikube}
	I0920 18:47:02.333139  762988 buildroot.go:174] setting up certificates
	I0920 18:47:02.333156  762988 provision.go:84] configureAuth start
	I0920 18:47:02.333175  762988 main.go:141] libmachine: (ha-525790) Calling .GetMachineName
	I0920 18:47:02.333477  762988 main.go:141] libmachine: (ha-525790) Calling .GetIP
	I0920 18:47:02.336179  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.336437  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.336466  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.336621  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:02.338903  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.339190  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.339228  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.339347  762988 provision.go:143] copyHostCerts
	I0920 18:47:02.339388  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 18:47:02.339428  762988 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem, removing ...
	I0920 18:47:02.339443  762988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 18:47:02.339511  762988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem (1123 bytes)
	I0920 18:47:02.339645  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 18:47:02.339667  762988 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem, removing ...
	I0920 18:47:02.339674  762988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 18:47:02.339705  762988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem (1679 bytes)
	I0920 18:47:02.339762  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 18:47:02.339781  762988 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem, removing ...
	I0920 18:47:02.339788  762988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 18:47:02.339812  762988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem (1078 bytes)
	I0920 18:47:02.339874  762988 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem org=jenkins.ha-525790 san=[127.0.0.1 192.168.39.149 ha-525790 localhost minikube]
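
The "generating server cert" step issues a TLS server certificate signed by the local minikube CA, carrying the SANs listed in the log (127.0.0.1, 192.168.39.149, ha-525790, localhost, minikube). The following is only a sketch of how such a certificate can be produced with the Go standard library; the file locations, key size, validity period and PKCS#1 key format are assumptions for illustration, not necessarily what minikube does internally.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

// loadCA reads a PEM certificate and a PKCS#1 RSA key (the key format is an
// assumption here) from disk.
func loadCA(certPath, keyPath string) (*x509.Certificate, *rsa.PrivateKey, error) {
	certPEM, err := os.ReadFile(certPath)
	if err != nil {
		return nil, nil, err
	}
	keyPEM, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, nil, err
	}
	certBlock, _ := pem.Decode(certPEM)
	keyBlock, _ := pem.Decode(keyPEM)
	if certBlock == nil || keyBlock == nil {
		return nil, nil, fmt.Errorf("no PEM data in %s or %s", certPath, keyPath)
	}
	cert, err := x509.ParseCertificate(certBlock.Bytes)
	if err != nil {
		return nil, nil, err
	}
	key, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	if err != nil {
		return nil, nil, err
	}
	return cert, key, nil
}

func main() {
	caCert, caKey, err := loadCA("ca.pem", "ca-key.pem") // illustrative paths
	if err != nil {
		panic(err)
	}
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-525790"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log line: hostname, localhost, minikube plus the machine IP.
		DNSNames:    []string{"ha-525790", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.149")},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, caCert, &priv.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
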
	I0920 18:47:02.453692  762988 provision.go:177] copyRemoteCerts
	I0920 18:47:02.453777  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:47:02.453804  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:02.456622  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.456981  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.457012  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.457155  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:02.457322  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.457514  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:02.457694  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:47:02.537102  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 18:47:02.537192  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:47:02.561583  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 18:47:02.561653  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0920 18:47:02.584887  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 18:47:02.584963  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:47:02.607882  762988 provision.go:87] duration metric: took 274.708599ms to configureAuth
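
copyRemoteCerts pushes the CA and server certificates into /etc/docker on the guest over the SSH connection established above (the three scp lines). A stand-in for those transfers, piping each file to `sudo tee` so the write can land in a root-owned directory; the host, key path and local file names are illustrative.

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

// pushCert copies a local PEM file to a root-owned path on the guest.
func pushCert(host, keyPath, local, remote string) error {
	data, err := os.ReadFile(local)
	if err != nil {
		return err
	}
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", keyPath,
		"docker@"+host,
		"sudo mkdir -p /etc/docker && sudo tee "+remote+" >/dev/null")
	cmd.Stdin = bytes.NewReader(data)
	return cmd.Run()
}

func main() {
	host := "192.168.39.149"
	key := "/home/jenkins/.minikube/machines/ha-525790/id_rsa" // illustrative
	files := map[string]string{
		"certs/ca.pem":         "/etc/docker/ca.pem",
		"certs/server.pem":     "/etc/docker/server.pem",
		"certs/server-key.pem": "/etc/docker/server-key.pem",
	}
	for local, remote := range files {
		if err := pushCert(host, key, local, remote); err != nil {
			log.Fatalf("copying %s: %v", local, err)
		}
	}
}
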
	I0920 18:47:02.607913  762988 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:47:02.608135  762988 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:47:02.608263  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:02.610585  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.610941  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.610966  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.611170  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:02.611364  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.611566  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.611733  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:02.611901  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:02.612097  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:47:02.612128  762988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:47:02.825619  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:47:02.825649  762988 main.go:141] libmachine: Checking connection to Docker...
	I0920 18:47:02.825670  762988 main.go:141] libmachine: (ha-525790) Calling .GetURL
	I0920 18:47:02.826777  762988 main.go:141] libmachine: (ha-525790) DBG | Using libvirt version 6000000
	I0920 18:47:02.828685  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.829016  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.829041  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.829240  762988 main.go:141] libmachine: Docker is up and running!
	I0920 18:47:02.829256  762988 main.go:141] libmachine: Reticulating splines...
	I0920 18:47:02.829269  762988 client.go:171] duration metric: took 23.94612541s to LocalClient.Create
	I0920 18:47:02.829292  762988 start.go:167] duration metric: took 23.946187981s to libmachine.API.Create "ha-525790"
	I0920 18:47:02.829302  762988 start.go:293] postStartSetup for "ha-525790" (driver="kvm2")
	I0920 18:47:02.829311  762988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:47:02.829329  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:02.829550  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:47:02.829607  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:02.831515  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.831740  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.831770  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.831871  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:02.832029  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.832155  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:02.832317  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:47:02.912925  762988 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:47:02.917265  762988 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:47:02.917289  762988 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/addons for local assets ...
	I0920 18:47:02.917365  762988 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/files for local assets ...
	I0920 18:47:02.917439  762988 filesync.go:149] local asset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> 7484972.pem in /etc/ssl/certs
	I0920 18:47:02.917449  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /etc/ssl/certs/7484972.pem
	I0920 18:47:02.917538  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:47:02.926976  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 18:47:02.950998  762988 start.go:296] duration metric: took 121.680006ms for postStartSetup
	I0920 18:47:02.951052  762988 main.go:141] libmachine: (ha-525790) Calling .GetConfigRaw
	I0920 18:47:02.951761  762988 main.go:141] libmachine: (ha-525790) Calling .GetIP
	I0920 18:47:02.954370  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.954692  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.954720  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.954955  762988 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 18:47:02.955155  762988 start.go:128] duration metric: took 24.09175682s to createHost
	I0920 18:47:02.955178  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:02.957364  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.957683  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.957707  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.957847  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:02.958049  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.958195  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.958370  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:02.958531  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:02.958721  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:47:02.958745  762988 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:47:03.055624  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726858023.014434190
	
	I0920 18:47:03.055646  762988 fix.go:216] guest clock: 1726858023.014434190
	I0920 18:47:03.055653  762988 fix.go:229] Guest: 2024-09-20 18:47:03.01443419 +0000 UTC Remote: 2024-09-20 18:47:02.955165997 +0000 UTC m=+24.204227210 (delta=59.268193ms)
	I0920 18:47:03.055673  762988 fix.go:200] guest clock delta is within tolerance: 59.268193ms
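
The `date +%s.%N` round-trip compares the guest clock against the host clock and only resyncs when the delta exceeds a tolerance; here the 59 ms delta is accepted. A sketch of that comparison, using the timestamp captured above and an assumed two-second tolerance (the real threshold is not stated in the log):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as captured in the log.
	guestOut := "1726858023.014434190"
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))

	// In the real flow the "Remote" side is the host's clock at the moment the
	// command returned; time.Now() stands in for it here.
	host := time.Now()

	delta := guest.Sub(host)
	const tolerance = 2 * time.Second // assumed threshold
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock is off by %v, would resync\n", delta)
	}
}
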
	I0920 18:47:03.055678  762988 start.go:83] releasing machines lock for "ha-525790", held for 24.192365497s
	I0920 18:47:03.055696  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:03.056004  762988 main.go:141] libmachine: (ha-525790) Calling .GetIP
	I0920 18:47:03.058619  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:03.058967  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:03.059002  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:03.059176  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:03.059645  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:03.059786  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:03.059913  762988 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:47:03.059955  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:03.060006  762988 ssh_runner.go:195] Run: cat /version.json
	I0920 18:47:03.060036  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:03.062498  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:03.062744  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:03.062833  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:03.062884  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:03.063020  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:03.063078  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:03.063109  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:03.063168  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:03.063236  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:03.063307  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:03.063405  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:03.063423  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:47:03.063542  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:03.063665  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:47:03.136335  762988 ssh_runner.go:195] Run: systemctl --version
	I0920 18:47:03.170125  762988 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:47:03.331364  762988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:47:03.337153  762988 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:47:03.337233  762988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:47:03.353297  762988 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:47:03.353324  762988 start.go:495] detecting cgroup driver to use...
	I0920 18:47:03.353385  762988 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:47:03.369816  762988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:47:03.383774  762988 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:47:03.383838  762988 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:47:03.397487  762988 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:47:03.411243  762988 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:47:03.523455  762988 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:47:03.671823  762988 docker.go:233] disabling docker service ...
	I0920 18:47:03.671918  762988 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:47:03.687139  762988 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:47:03.700569  762988 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:47:03.840971  762988 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:47:03.962385  762988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:47:03.976750  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:47:03.995774  762988 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:47:03.995835  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:04.007019  762988 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:47:04.007124  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:04.018001  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:04.028509  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:04.039860  762988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:47:04.050769  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:04.061191  762988 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:04.077692  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:04.088041  762988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:47:04.097754  762988 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:47:04.097807  762988 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:47:04.110739  762988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
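
The sysctl probe failing with status 255 is not fatal: when /proc/sys/net/bridge/bridge-nf-call-iptables does not exist yet, the driver loads br_netfilter and then enables IPv4 forwarding, exactly the sequence shown above. A small version of that fallback (it has to run as root on the guest; the paths are the standard kernel ones):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// If the bridge netfilter sysctl is not there yet, the br_netfilter module
	// has not been loaded; load it before relying on the sysctl.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		log.Println("couldn't verify netfilter, loading br_netfilter")
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
		}
	}

	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		log.Fatalf("enabling ip_forward: %v", err)
	}
	log.Println("netfilter and ip_forward configured")
}
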
	I0920 18:47:04.120636  762988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:47:04.245299  762988 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:47:04.341170  762988 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:47:04.341258  762988 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:47:04.345975  762988 start.go:563] Will wait 60s for crictl version
	I0920 18:47:04.346047  762988 ssh_runner.go:195] Run: which crictl
	I0920 18:47:04.349925  762988 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:47:04.390230  762988 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:47:04.390341  762988 ssh_runner.go:195] Run: crio --version
	I0920 18:47:04.418445  762988 ssh_runner.go:195] Run: crio --version
	I0920 18:47:04.447740  762988 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:47:04.448969  762988 main.go:141] libmachine: (ha-525790) Calling .GetIP
	I0920 18:47:04.451547  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:04.451921  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:04.451950  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:04.452148  762988 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:47:04.456198  762988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:47:04.470013  762988 kubeadm.go:883] updating cluster {Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0920 18:47:04.470186  762988 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:47:04.470265  762988 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:47:04.502535  762988 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 18:47:04.502609  762988 ssh_runner.go:195] Run: which lz4
	I0920 18:47:04.506581  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0920 18:47:04.506673  762988 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:47:04.510814  762988 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:47:04.510861  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 18:47:05.839638  762988 crio.go:462] duration metric: took 1.33298536s to copy over tarball
	I0920 18:47:05.839723  762988 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:47:07.786766  762988 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.947011448s)
	I0920 18:47:07.786795  762988 crio.go:469] duration metric: took 1.947128446s to extract the tarball
	I0920 18:47:07.786805  762988 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 18:47:07.822913  762988 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:47:07.866552  762988 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:47:07.866583  762988 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:47:07.866592  762988 kubeadm.go:934] updating node { 192.168.39.149 8443 v1.31.1 crio true true} ...
	I0920 18:47:07.866704  762988 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-525790 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:47:07.866781  762988 ssh_runner.go:195] Run: crio config
	I0920 18:47:07.918540  762988 cni.go:84] Creating CNI manager for ""
	I0920 18:47:07.918563  762988 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 18:47:07.918573  762988 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:47:07.918597  762988 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.149 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-525790 NodeName:ha-525790 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.149"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.149 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:47:07.918730  762988 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.149
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-525790"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.149
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.149"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
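
For reference, the kubeadm configuration rendered above can be sanity-checked before it is handed to kubeadm. The sketch below is illustrative only; it assumes the rendered file ends up at /var/tmp/minikube/kubeadm.yaml (as the copy step later in this log does) and that the v1.31 kubeadm binary is on the PATH:

	# validate the rendered config against the current kubeadm API types
	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	# the deprecation warnings emitted later in this log point at the same migration command
	kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config kubeadm-new.yaml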
	
	I0920 18:47:07.918753  762988 kube-vip.go:115] generating kube-vip config ...
	I0920 18:47:07.918798  762988 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 18:47:07.936288  762988 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 18:47:07.936429  762988 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
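
Once this static pod is running, the advertised VIP can be checked directly on the control-plane host. A minimal sketch using only values from the manifest above (interface eth0, address 192.168.39.254, port 8443):

	# the VIP should appear as an additional address on eth0
	ip addr show eth0 | grep 192.168.39.254
	# once the control plane is up, the API server should answer on the VIP
	curl -k https://192.168.39.254:8443/healthz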
	I0920 18:47:07.936497  762988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:47:07.945867  762988 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:47:07.945940  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0920 18:47:07.955191  762988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0920 18:47:07.971064  762988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:47:07.986880  762988 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0920 18:47:08.002662  762988 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0920 18:47:08.019579  762988 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 18:47:08.023552  762988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:47:08.035218  762988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:47:08.170218  762988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:47:08.187527  762988 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790 for IP: 192.168.39.149
	I0920 18:47:08.187547  762988 certs.go:194] generating shared ca certs ...
	I0920 18:47:08.187568  762988 certs.go:226] acquiring lock for ca certs: {Name:mkf559981e1ff96dd3b092845a7637f34a653668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:08.187793  762988 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key
	I0920 18:47:08.187883  762988 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key
	I0920 18:47:08.187899  762988 certs.go:256] generating profile certs ...
	I0920 18:47:08.187973  762988 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.key
	I0920 18:47:08.187993  762988 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.crt with IP's: []
	I0920 18:47:08.272186  762988 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.crt ...
	I0920 18:47:08.272216  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.crt: {Name:mk7bd0f4b5267ef296fffaf22c63ade5f9317aee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:08.272387  762988 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.key ...
	I0920 18:47:08.272398  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.key: {Name:mk8397cc62a5b5fd0095d7257df95debaa0a3c5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:08.272479  762988 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.39888826
	I0920 18:47:08.272493  762988 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.39888826 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.149 192.168.39.254]
	I0920 18:47:08.448019  762988 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.39888826 ...
	I0920 18:47:08.448049  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.39888826: {Name:mk46ff6887950fec6d616a29dc6bce205118977d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:08.448240  762988 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.39888826 ...
	I0920 18:47:08.448262  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.39888826: {Name:mk9b06f9440d087fb58cd5f31657e72732704a6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:08.448360  762988 certs.go:381] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.39888826 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt
	I0920 18:47:08.448487  762988 certs.go:385] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.39888826 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key
	I0920 18:47:08.448573  762988 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key
	I0920 18:47:08.448592  762988 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt with IP's: []
	I0920 18:47:08.547781  762988 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt ...
	I0920 18:47:08.547811  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt: {Name:mk5f440c35d9494faae93b7f24e431b15c93d038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:08.547991  762988 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key ...
	I0920 18:47:08.548027  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key: {Name:mk1af5a674ecd36547ebff165e719d66a8eaf2a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:08.548154  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 18:47:08.548179  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 18:47:08.548198  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 18:47:08.548217  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 18:47:08.548234  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 18:47:08.548251  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 18:47:08.548270  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 18:47:08.548288  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 18:47:08.548368  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem (1338 bytes)
	W0920 18:47:08.548419  762988 certs.go:480] ignoring /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497_empty.pem, impossibly tiny 0 bytes
	I0920 18:47:08.548433  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:47:08.548468  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:47:08.548498  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:47:08.548526  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem (1679 bytes)
	I0920 18:47:08.548582  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 18:47:08.548616  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem -> /usr/share/ca-certificates/748497.pem
	I0920 18:47:08.548636  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /usr/share/ca-certificates/7484972.pem
	I0920 18:47:08.548655  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:47:08.549274  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:47:08.575606  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:47:08.599030  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:47:08.622271  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:47:08.645192  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 18:47:08.668189  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:47:08.691174  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:47:08.714332  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:47:08.737751  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem --> /usr/share/ca-certificates/748497.pem (1338 bytes)
	I0920 18:47:08.760383  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /usr/share/ca-certificates/7484972.pem (1708 bytes)
	I0920 18:47:08.783502  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:47:08.806863  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:47:08.822981  762988 ssh_runner.go:195] Run: openssl version
	I0920 18:47:08.828850  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/748497.pem && ln -fs /usr/share/ca-certificates/748497.pem /etc/ssl/certs/748497.pem"
	I0920 18:47:08.839624  762988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/748497.pem
	I0920 18:47:08.844261  762988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 18:33 /usr/share/ca-certificates/748497.pem
	I0920 18:47:08.844324  762988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/748497.pem
	I0920 18:47:08.850299  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/748497.pem /etc/ssl/certs/51391683.0"
	I0920 18:47:08.860928  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7484972.pem && ln -fs /usr/share/ca-certificates/7484972.pem /etc/ssl/certs/7484972.pem"
	I0920 18:47:08.871606  762988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7484972.pem
	I0920 18:47:08.876264  762988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 18:33 /usr/share/ca-certificates/7484972.pem
	I0920 18:47:08.876328  762988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7484972.pem
	I0920 18:47:08.882105  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7484972.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:47:08.892622  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:47:08.903139  762988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:47:08.907653  762988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:47:08.907717  762988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:47:08.913362  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
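
The three blocks above all follow the standard OpenSSL CA-directory convention: each certificate is placed under /usr/share/ca-certificates and a <subject-hash>.0 symlink is created in /etc/ssl/certs so OpenSSL can locate it by hash. Condensed for one certificate (hash value matching the minikubeCA.pem lines above):

	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"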
	I0920 18:47:08.923853  762988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:47:08.927915  762988 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 18:47:08.927964  762988 kubeadm.go:392] StartCluster: {Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:47:08.928033  762988 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:47:08.928074  762988 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:47:08.975658  762988 cri.go:89] found id: ""
	I0920 18:47:08.975731  762988 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:47:08.987853  762988 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:47:09.001997  762988 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:47:09.015239  762988 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:47:09.015263  762988 kubeadm.go:157] found existing configuration files:
	
	I0920 18:47:09.015328  762988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:47:09.024322  762988 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:47:09.024391  762988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:47:09.033789  762988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:47:09.042729  762988 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:47:09.042806  762988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:47:09.052389  762988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:47:09.061397  762988 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:47:09.061452  762988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:47:09.070628  762988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:47:09.079481  762988 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:47:09.079574  762988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:47:09.088812  762988 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:47:09.197025  762988 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 18:47:09.197195  762988 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:47:09.302732  762988 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:47:09.302875  762988 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:47:09.303013  762988 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 18:47:09.313100  762988 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:47:09.315042  762988 out.go:235]   - Generating certificates and keys ...
	I0920 18:47:09.315126  762988 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:47:09.315194  762988 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:47:09.561066  762988 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 18:47:09.701075  762988 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 18:47:09.963251  762988 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 18:47:10.218874  762988 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 18:47:10.374815  762988 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 18:47:10.375019  762988 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-525790 localhost] and IPs [192.168.39.149 127.0.0.1 ::1]
	I0920 18:47:10.536783  762988 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 18:47:10.536945  762988 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-525790 localhost] and IPs [192.168.39.149 127.0.0.1 ::1]
	I0920 18:47:10.653048  762988 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 18:47:10.817540  762988 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 18:47:11.052072  762988 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 18:47:11.052166  762988 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:47:11.275604  762988 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:47:11.340320  762988 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 18:47:11.606513  762988 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:47:11.722778  762988 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:47:11.939356  762988 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:47:11.939850  762988 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:47:11.942972  762988 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:47:11.945229  762988 out.go:235]   - Booting up control plane ...
	I0920 18:47:11.945356  762988 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:47:11.945485  762988 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:47:11.945574  762988 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:47:11.961277  762988 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:47:11.967235  762988 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:47:11.967294  762988 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:47:12.103452  762988 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 18:47:12.103652  762988 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 18:47:12.605055  762988 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.510324ms
	I0920 18:47:12.605178  762988 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 18:47:18.584157  762988 kubeadm.go:310] [api-check] The API server is healthy after 5.978671976s
	I0920 18:47:18.596695  762988 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 18:47:19.113972  762988 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 18:47:19.144976  762988 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 18:47:19.145190  762988 kubeadm.go:310] [mark-control-plane] Marking the node ha-525790 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 18:47:19.157610  762988 kubeadm.go:310] [bootstrap-token] Using token: qd32pn.8pqkvbtlqp80l6sb
	I0920 18:47:19.159113  762988 out.go:235]   - Configuring RBAC rules ...
	I0920 18:47:19.159238  762988 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 18:47:19.164190  762988 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 18:47:19.177203  762988 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 18:47:19.185189  762988 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 18:47:19.189876  762988 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 18:47:19.193529  762988 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 18:47:19.311685  762988 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 18:47:19.754352  762988 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 18:47:20.310973  762988 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 18:47:20.311943  762988 kubeadm.go:310] 
	I0920 18:47:20.312030  762988 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 18:47:20.312039  762988 kubeadm.go:310] 
	I0920 18:47:20.312140  762988 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 18:47:20.312149  762988 kubeadm.go:310] 
	I0920 18:47:20.312178  762988 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 18:47:20.312290  762988 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 18:47:20.312369  762988 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 18:47:20.312380  762988 kubeadm.go:310] 
	I0920 18:47:20.312430  762988 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 18:47:20.312442  762988 kubeadm.go:310] 
	I0920 18:47:20.312481  762988 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 18:47:20.312487  762988 kubeadm.go:310] 
	I0920 18:47:20.312536  762988 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 18:47:20.312615  762988 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 18:47:20.312715  762988 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 18:47:20.312735  762988 kubeadm.go:310] 
	I0920 18:47:20.312856  762988 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 18:47:20.312961  762988 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 18:47:20.312973  762988 kubeadm.go:310] 
	I0920 18:47:20.313079  762988 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qd32pn.8pqkvbtlqp80l6sb \
	I0920 18:47:20.313228  762988 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:947ef21afc8104efa9fe7e5dbe397ab7540e2665a521761d784eb9c9d11b061d \
	I0920 18:47:20.313262  762988 kubeadm.go:310] 	--control-plane 
	I0920 18:47:20.313271  762988 kubeadm.go:310] 
	I0920 18:47:20.313383  762988 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 18:47:20.313397  762988 kubeadm.go:310] 
	I0920 18:47:20.313513  762988 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qd32pn.8pqkvbtlqp80l6sb \
	I0920 18:47:20.313639  762988 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:947ef21afc8104efa9fe7e5dbe397ab7540e2665a521761d784eb9c9d11b061d 
	I0920 18:47:20.314670  762988 kubeadm.go:310] W0920 18:47:09.152542     827 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:47:20.315023  762988 kubeadm.go:310] W0920 18:47:09.153465     827 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:47:20.315172  762988 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
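
The kubelet-service warning above appears harmless here, since this log shows kubelet being started directly via systemctl a few lines earlier; if the same kubeadm configuration were reused outside minikube, the usual remedy would simply be:

	sudo systemctl enable --now kubelet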
	I0920 18:47:20.315210  762988 cni.go:84] Creating CNI manager for ""
	I0920 18:47:20.315225  762988 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 18:47:20.317188  762988 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0920 18:47:20.318757  762988 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0920 18:47:20.324392  762988 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0920 18:47:20.324411  762988 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0920 18:47:20.347801  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0920 18:47:20.735995  762988 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:47:20.736093  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:47:20.736105  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-525790 minikube.k8s.io/updated_at=2024_09_20T18_47_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a minikube.k8s.io/name=ha-525790 minikube.k8s.io/primary=true
	I0920 18:47:20.761909  762988 ops.go:34] apiserver oom_adj: -16
	I0920 18:47:20.876678  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:47:21.377092  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:47:21.876896  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:47:22.377010  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:47:22.877069  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:47:23.377474  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:47:23.877640  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:47:24.377768  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:47:24.504008  762988 kubeadm.go:1113] duration metric: took 3.76800228s to wait for elevateKubeSystemPrivileges
	I0920 18:47:24.504045  762988 kubeadm.go:394] duration metric: took 15.576084363s to StartCluster
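
The repeated "get sa default" invocations above are a wait loop: start does not continue until the controller manager has provisioned the default service account. An equivalent hand-written wait, purely as a sketch reusing the same binary and kubeconfig paths:

	until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done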
	I0920 18:47:24.504070  762988 settings.go:142] acquiring lock: {Name:mk0bd1e421bf437575c076c52c1ff2f74497a1ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:24.504282  762988 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:47:24.505108  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/kubeconfig: {Name:mk275c54cf52b0ccdc22fcaa39c7b9c31092c648 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:24.505342  762988 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:47:24.505366  762988 start.go:241] waiting for startup goroutines ...
	I0920 18:47:24.505366  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 18:47:24.505382  762988 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 18:47:24.505468  762988 addons.go:69] Setting storage-provisioner=true in profile "ha-525790"
	I0920 18:47:24.505483  762988 addons.go:69] Setting default-storageclass=true in profile "ha-525790"
	I0920 18:47:24.505492  762988 addons.go:234] Setting addon storage-provisioner=true in "ha-525790"
	I0920 18:47:24.505509  762988 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-525790"
	I0920 18:47:24.505524  762988 host.go:66] Checking if "ha-525790" exists ...
	I0920 18:47:24.505571  762988 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:47:24.505974  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:47:24.506023  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:47:24.506141  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:47:24.506249  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:47:24.522502  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41623
	I0920 18:47:24.522534  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38073
	I0920 18:47:24.522991  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:47:24.523040  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:47:24.523523  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:47:24.523546  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:47:24.523666  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:47:24.523684  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:47:24.523961  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:47:24.524077  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:47:24.524239  762988 main.go:141] libmachine: (ha-525790) Calling .GetState
	I0920 18:47:24.524629  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:47:24.524696  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:47:24.526413  762988 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:47:24.526810  762988 kapi.go:59] client config for ha-525790: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.crt", KeyFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.key", CAFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 18:47:24.527471  762988 cert_rotation.go:140] Starting client certificate rotation controller
	I0920 18:47:24.527819  762988 addons.go:234] Setting addon default-storageclass=true in "ha-525790"
	I0920 18:47:24.527875  762988 host.go:66] Checking if "ha-525790" exists ...
	I0920 18:47:24.528265  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:47:24.528313  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:47:24.542871  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35201
	I0920 18:47:24.543236  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33445
	I0920 18:47:24.543494  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:47:24.543587  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:47:24.544071  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:47:24.544093  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:47:24.544229  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:47:24.544255  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:47:24.544432  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:47:24.544641  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:47:24.544640  762988 main.go:141] libmachine: (ha-525790) Calling .GetState
	I0920 18:47:24.545205  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:47:24.545253  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:47:24.546391  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:24.548710  762988 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:47:24.550144  762988 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:47:24.550165  762988 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:47:24.550186  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:24.553367  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:24.553828  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:24.553854  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:24.553998  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:24.554216  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:24.554440  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:24.554622  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:47:24.561549  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37321
	I0920 18:47:24.561966  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:47:24.562494  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:47:24.562519  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:47:24.562876  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:47:24.563072  762988 main.go:141] libmachine: (ha-525790) Calling .GetState
	I0920 18:47:24.564587  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:24.564814  762988 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:47:24.564831  762988 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:47:24.564849  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:24.567687  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:24.568171  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:24.568193  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:24.568319  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:24.568510  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:24.568703  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:24.568857  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:47:24.656392  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 18:47:24.815217  762988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:47:24.828379  762988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:47:25.253619  762988 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
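
The sed pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host address on the VM network (192.168.39.1) from inside the cluster. A quick way to confirm the injected record, sketched here with an arbitrary test image:

	kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
	kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 -- nslookup host.minikube.internal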
	I0920 18:47:25.464741  762988 main.go:141] libmachine: Making call to close driver server
	I0920 18:47:25.464767  762988 main.go:141] libmachine: (ha-525790) Calling .Close
	I0920 18:47:25.464846  762988 main.go:141] libmachine: Making call to close driver server
	I0920 18:47:25.464869  762988 main.go:141] libmachine: (ha-525790) Calling .Close
	I0920 18:47:25.465054  762988 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:47:25.465071  762988 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:47:25.465081  762988 main.go:141] libmachine: Making call to close driver server
	I0920 18:47:25.465089  762988 main.go:141] libmachine: (ha-525790) Calling .Close
	I0920 18:47:25.465214  762988 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:47:25.465241  762988 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:47:25.465251  762988 main.go:141] libmachine: Making call to close driver server
	I0920 18:47:25.465258  762988 main.go:141] libmachine: (ha-525790) Calling .Close
	I0920 18:47:25.465320  762988 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:47:25.465336  762988 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:47:25.465344  762988 main.go:141] libmachine: (ha-525790) DBG | Closing plugin on server side
	I0920 18:47:25.465497  762988 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:47:25.465514  762988 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:47:25.465592  762988 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0920 18:47:25.465620  762988 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0920 18:47:25.465728  762988 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0920 18:47:25.465739  762988 round_trippers.go:469] Request Headers:
	I0920 18:47:25.465759  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:47:25.465768  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:47:25.475780  762988 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0920 18:47:25.476328  762988 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0920 18:47:25.476346  762988 round_trippers.go:469] Request Headers:
	I0920 18:47:25.476353  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:47:25.476356  762988 round_trippers.go:473]     Content-Type: application/json
	I0920 18:47:25.476359  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:47:25.478464  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:47:25.478670  762988 main.go:141] libmachine: Making call to close driver server
	I0920 18:47:25.478686  762988 main.go:141] libmachine: (ha-525790) Calling .Close
	I0920 18:47:25.479015  762988 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:47:25.479056  762988 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:47:25.479019  762988 main.go:141] libmachine: (ha-525790) DBG | Closing plugin on server side
	I0920 18:47:25.480685  762988 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0920 18:47:25.481832  762988 addons.go:510] duration metric: took 976.454814ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0920 18:47:25.481877  762988 start.go:246] waiting for cluster config update ...
	I0920 18:47:25.481891  762988 start.go:255] writing updated cluster config ...
	I0920 18:47:25.483450  762988 out.go:201] 
	I0920 18:47:25.484717  762988 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:47:25.484795  762988 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 18:47:25.486329  762988 out.go:177] * Starting "ha-525790-m02" control-plane node in "ha-525790" cluster
	I0920 18:47:25.487492  762988 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:47:25.487516  762988 cache.go:56] Caching tarball of preloaded images
	I0920 18:47:25.487633  762988 preload.go:172] Found /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:47:25.487647  762988 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:47:25.487721  762988 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 18:47:25.487913  762988 start.go:360] acquireMachinesLock for ha-525790-m02: {Name:mke27b943eaf3105a3a7818ba8cbb5bd07aa92e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:47:25.487963  762988 start.go:364] duration metric: took 29.413µs to acquireMachinesLock for "ha-525790-m02"
	I0920 18:47:25.487982  762988 start.go:93] Provisioning new machine with config: &{Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:47:25.488070  762988 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0920 18:47:25.489602  762988 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 18:47:25.489710  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:47:25.489745  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:47:25.504741  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I0920 18:47:25.505176  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:47:25.505735  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:47:25.505756  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:47:25.506114  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:47:25.506304  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetMachineName
	I0920 18:47:25.506440  762988 main.go:141] libmachine: (ha-525790-m02) Calling .DriverName
	I0920 18:47:25.506586  762988 start.go:159] libmachine.API.Create for "ha-525790" (driver="kvm2")
	I0920 18:47:25.506620  762988 client.go:168] LocalClient.Create starting
	I0920 18:47:25.506658  762988 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem
	I0920 18:47:25.506697  762988 main.go:141] libmachine: Decoding PEM data...
	I0920 18:47:25.506717  762988 main.go:141] libmachine: Parsing certificate...
	I0920 18:47:25.506786  762988 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem
	I0920 18:47:25.506825  762988 main.go:141] libmachine: Decoding PEM data...
	I0920 18:47:25.506864  762988 main.go:141] libmachine: Parsing certificate...
	I0920 18:47:25.506891  762988 main.go:141] libmachine: Running pre-create checks...
	I0920 18:47:25.506903  762988 main.go:141] libmachine: (ha-525790-m02) Calling .PreCreateCheck
	I0920 18:47:25.507083  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetConfigRaw
	I0920 18:47:25.507514  762988 main.go:141] libmachine: Creating machine...
	I0920 18:47:25.507530  762988 main.go:141] libmachine: (ha-525790-m02) Calling .Create
	I0920 18:47:25.507681  762988 main.go:141] libmachine: (ha-525790-m02) Creating KVM machine...
	I0920 18:47:25.508920  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found existing default KVM network
	I0920 18:47:25.509048  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found existing private KVM network mk-ha-525790
	I0920 18:47:25.509185  762988 main.go:141] libmachine: (ha-525790-m02) Setting up store path in /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02 ...
	I0920 18:47:25.509201  762988 main.go:141] libmachine: (ha-525790-m02) Building disk image from file:///home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 18:47:25.509310  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:25.509191  763373 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:47:25.509384  762988 main.go:141] libmachine: (ha-525790-m02) Downloading /home/jenkins/minikube-integration/19678-739831/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 18:47:25.810758  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:25.810588  763373 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/id_rsa...
	I0920 18:47:26.052474  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:26.052313  763373 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/ha-525790-m02.rawdisk...
	I0920 18:47:26.052509  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Writing magic tar header
	I0920 18:47:26.052523  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Writing SSH key tar header
	I0920 18:47:26.052535  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:26.052440  763373 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02 ...
	I0920 18:47:26.052629  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02
	I0920 18:47:26.052676  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube/machines
	I0920 18:47:26.052691  762988 main.go:141] libmachine: (ha-525790-m02) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02 (perms=drwx------)
	I0920 18:47:26.052705  762988 main.go:141] libmachine: (ha-525790-m02) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube/machines (perms=drwxr-xr-x)
	I0920 18:47:26.052718  762988 main.go:141] libmachine: (ha-525790-m02) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube (perms=drwxr-xr-x)
	I0920 18:47:26.052738  762988 main.go:141] libmachine: (ha-525790-m02) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831 (perms=drwxrwxr-x)
	I0920 18:47:26.052758  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:47:26.052768  762988 main.go:141] libmachine: (ha-525790-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 18:47:26.052788  762988 main.go:141] libmachine: (ha-525790-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 18:47:26.052797  762988 main.go:141] libmachine: (ha-525790-m02) Creating domain...
	I0920 18:47:26.052815  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831
	I0920 18:47:26.052826  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 18:47:26.052837  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Checking permissions on dir: /home/jenkins
	I0920 18:47:26.052849  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Checking permissions on dir: /home
	I0920 18:47:26.052861  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Skipping /home - not owner
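
The permission fix-up above walks from the new machine directory back toward /home, adding the owner-execute bit on every directory the Jenkins user owns and skipping the ones it does not (hence "Skipping /home - not owner"). A minimal sketch of that pattern, assuming a Linux host and an illustrative path; this is not the kvm2 driver's actual code:

    // Sketch: walk up from the machine directory and, for each directory the current
    // user owns, add the owner-execute bit so the path stays traversable.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "syscall"
    )

    func ownedByCurrentUser(info os.FileInfo) bool {
        st, ok := info.Sys().(*syscall.Stat_t) // Linux-specific, matching this test host
        return ok && int(st.Uid) == os.Getuid()
    }

    func fixPermissions(path string) error {
        for dir := path; dir != "/" && dir != "."; dir = filepath.Dir(dir) {
            info, err := os.Stat(dir)
            if err != nil {
                return err
            }
            if !ownedByCurrentUser(info) {
                fmt.Printf("Skipping %s - not owner\n", dir)
                continue
            }
            mode := info.Mode().Perm() | 0o100 // add owner execute
            if err := os.Chmod(dir, mode); err != nil {
                return err
            }
            fmt.Printf("Setting executable bit on %s (perms=%v)\n", dir, mode)
        }
        return nil
    }

    func main() {
        // Path is for illustration only.
        if err := fixPermissions("/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02"); err != nil {
            fmt.Println(err)
        }
    }
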
	I0920 18:47:26.053670  762988 main.go:141] libmachine: (ha-525790-m02) define libvirt domain using xml: 
	I0920 18:47:26.053692  762988 main.go:141] libmachine: (ha-525790-m02) <domain type='kvm'>
	I0920 18:47:26.053711  762988 main.go:141] libmachine: (ha-525790-m02)   <name>ha-525790-m02</name>
	I0920 18:47:26.053719  762988 main.go:141] libmachine: (ha-525790-m02)   <memory unit='MiB'>2200</memory>
	I0920 18:47:26.053731  762988 main.go:141] libmachine: (ha-525790-m02)   <vcpu>2</vcpu>
	I0920 18:47:26.053741  762988 main.go:141] libmachine: (ha-525790-m02)   <features>
	I0920 18:47:26.053752  762988 main.go:141] libmachine: (ha-525790-m02)     <acpi/>
	I0920 18:47:26.053761  762988 main.go:141] libmachine: (ha-525790-m02)     <apic/>
	I0920 18:47:26.053790  762988 main.go:141] libmachine: (ha-525790-m02)     <pae/>
	I0920 18:47:26.053810  762988 main.go:141] libmachine: (ha-525790-m02)     
	I0920 18:47:26.053820  762988 main.go:141] libmachine: (ha-525790-m02)   </features>
	I0920 18:47:26.053828  762988 main.go:141] libmachine: (ha-525790-m02)   <cpu mode='host-passthrough'>
	I0920 18:47:26.053841  762988 main.go:141] libmachine: (ha-525790-m02)   
	I0920 18:47:26.053848  762988 main.go:141] libmachine: (ha-525790-m02)   </cpu>
	I0920 18:47:26.053859  762988 main.go:141] libmachine: (ha-525790-m02)   <os>
	I0920 18:47:26.053883  762988 main.go:141] libmachine: (ha-525790-m02)     <type>hvm</type>
	I0920 18:47:26.053908  762988 main.go:141] libmachine: (ha-525790-m02)     <boot dev='cdrom'/>
	I0920 18:47:26.053933  762988 main.go:141] libmachine: (ha-525790-m02)     <boot dev='hd'/>
	I0920 18:47:26.053946  762988 main.go:141] libmachine: (ha-525790-m02)     <bootmenu enable='no'/>
	I0920 18:47:26.053958  762988 main.go:141] libmachine: (ha-525790-m02)   </os>
	I0920 18:47:26.053975  762988 main.go:141] libmachine: (ha-525790-m02)   <devices>
	I0920 18:47:26.053988  762988 main.go:141] libmachine: (ha-525790-m02)     <disk type='file' device='cdrom'>
	I0920 18:47:26.053999  762988 main.go:141] libmachine: (ha-525790-m02)       <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/boot2docker.iso'/>
	I0920 18:47:26.054008  762988 main.go:141] libmachine: (ha-525790-m02)       <target dev='hdc' bus='scsi'/>
	I0920 18:47:26.054017  762988 main.go:141] libmachine: (ha-525790-m02)       <readonly/>
	I0920 18:47:26.054026  762988 main.go:141] libmachine: (ha-525790-m02)     </disk>
	I0920 18:47:26.054036  762988 main.go:141] libmachine: (ha-525790-m02)     <disk type='file' device='disk'>
	I0920 18:47:26.054048  762988 main.go:141] libmachine: (ha-525790-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 18:47:26.054067  762988 main.go:141] libmachine: (ha-525790-m02)       <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/ha-525790-m02.rawdisk'/>
	I0920 18:47:26.054080  762988 main.go:141] libmachine: (ha-525790-m02)       <target dev='hda' bus='virtio'/>
	I0920 18:47:26.054092  762988 main.go:141] libmachine: (ha-525790-m02)     </disk>
	I0920 18:47:26.054102  762988 main.go:141] libmachine: (ha-525790-m02)     <interface type='network'>
	I0920 18:47:26.054113  762988 main.go:141] libmachine: (ha-525790-m02)       <source network='mk-ha-525790'/>
	I0920 18:47:26.054121  762988 main.go:141] libmachine: (ha-525790-m02)       <model type='virtio'/>
	I0920 18:47:26.054138  762988 main.go:141] libmachine: (ha-525790-m02)     </interface>
	I0920 18:47:26.054148  762988 main.go:141] libmachine: (ha-525790-m02)     <interface type='network'>
	I0920 18:47:26.054159  762988 main.go:141] libmachine: (ha-525790-m02)       <source network='default'/>
	I0920 18:47:26.054170  762988 main.go:141] libmachine: (ha-525790-m02)       <model type='virtio'/>
	I0920 18:47:26.054182  762988 main.go:141] libmachine: (ha-525790-m02)     </interface>
	I0920 18:47:26.054192  762988 main.go:141] libmachine: (ha-525790-m02)     <serial type='pty'>
	I0920 18:47:26.054202  762988 main.go:141] libmachine: (ha-525790-m02)       <target port='0'/>
	I0920 18:47:26.054210  762988 main.go:141] libmachine: (ha-525790-m02)     </serial>
	I0920 18:47:26.054226  762988 main.go:141] libmachine: (ha-525790-m02)     <console type='pty'>
	I0920 18:47:26.054239  762988 main.go:141] libmachine: (ha-525790-m02)       <target type='serial' port='0'/>
	I0920 18:47:26.054250  762988 main.go:141] libmachine: (ha-525790-m02)     </console>
	I0920 18:47:26.054260  762988 main.go:141] libmachine: (ha-525790-m02)     <rng model='virtio'>
	I0920 18:47:26.054269  762988 main.go:141] libmachine: (ha-525790-m02)       <backend model='random'>/dev/random</backend>
	I0920 18:47:26.054275  762988 main.go:141] libmachine: (ha-525790-m02)     </rng>
	I0920 18:47:26.054282  762988 main.go:141] libmachine: (ha-525790-m02)     
	I0920 18:47:26.054290  762988 main.go:141] libmachine: (ha-525790-m02)     
	I0920 18:47:26.054302  762988 main.go:141] libmachine: (ha-525790-m02)   </devices>
	I0920 18:47:26.054314  762988 main.go:141] libmachine: (ha-525790-m02) </domain>
	I0920 18:47:26.054327  762988 main.go:141] libmachine: (ha-525790-m02) 
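
The XML dumped above is the libvirt domain description the kvm2 driver defines for the new node (boot ISO, raw disk, two virtio NICs on the mk-ha-525790 and default networks, serial console, RNG). As a hedged illustration only, not minikube's implementation, defining and booting a domain from such an XML file with the Go libvirt bindings looks roughly like this; the binding import path and the file name are assumptions:

    // Minimal sketch: define and start a libvirt domain from an XML description
    // like the one logged above.
    package main

    import (
        "fmt"
        "log"
        "os"

        libvirt "libvirt.org/go/libvirt" // assumed binding; older code imports github.com/libvirt/libvirt-go
    )

    func main() {
        xmlDesc, err := os.ReadFile("ha-525790-m02.xml") // hypothetical file holding the XML above
        if err != nil {
            log.Fatal(err)
        }

        conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // DomainDefineXML registers the domain; Create() actually boots it.
        dom, err := conn.DomainDefineXML(string(xmlDesc))
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            log.Fatal(err)
        }
        fmt.Println("domain defined and started")
    }
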
	I0920 18:47:26.060630  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:c9:44:90 in network default
	I0920 18:47:26.061118  762988 main.go:141] libmachine: (ha-525790-m02) Ensuring networks are active...
	I0920 18:47:26.061136  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:26.061831  762988 main.go:141] libmachine: (ha-525790-m02) Ensuring network default is active
	I0920 18:47:26.062169  762988 main.go:141] libmachine: (ha-525790-m02) Ensuring network mk-ha-525790 is active
	I0920 18:47:26.062475  762988 main.go:141] libmachine: (ha-525790-m02) Getting domain xml...
	I0920 18:47:26.063135  762988 main.go:141] libmachine: (ha-525790-m02) Creating domain...
	I0920 18:47:27.281978  762988 main.go:141] libmachine: (ha-525790-m02) Waiting to get IP...
	I0920 18:47:27.282784  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:27.283239  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:27.283266  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:27.283218  763373 retry.go:31] will retry after 308.177361ms: waiting for machine to come up
	I0920 18:47:27.592590  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:27.593066  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:27.593096  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:27.593029  763373 retry.go:31] will retry after 320.236434ms: waiting for machine to come up
	I0920 18:47:27.914511  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:27.914888  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:27.914914  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:27.914871  763373 retry.go:31] will retry after 467.681075ms: waiting for machine to come up
	I0920 18:47:28.384709  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:28.385145  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:28.385176  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:28.385093  763373 retry.go:31] will retry after 475.809922ms: waiting for machine to come up
	I0920 18:47:28.862677  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:28.863104  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:28.863166  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:28.863088  763373 retry.go:31] will retry after 752.437443ms: waiting for machine to come up
	I0920 18:47:29.616869  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:29.617208  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:29.617236  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:29.617153  763373 retry.go:31] will retry after 885.836184ms: waiting for machine to come up
	I0920 18:47:30.505116  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:30.505517  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:30.505574  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:30.505468  763373 retry.go:31] will retry after 963.771364ms: waiting for machine to come up
	I0920 18:47:31.470533  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:31.470960  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:31.470987  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:31.470922  763373 retry.go:31] will retry after 1.119790188s: waiting for machine to come up
	I0920 18:47:32.592108  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:32.592570  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:32.592610  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:32.592526  763373 retry.go:31] will retry after 1.532725085s: waiting for machine to come up
	I0920 18:47:34.127220  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:34.127626  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:34.127659  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:34.127555  763373 retry.go:31] will retry after 1.862816679s: waiting for machine to come up
	I0920 18:47:35.991806  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:35.992125  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:35.992154  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:35.992071  763373 retry.go:31] will retry after 2.15065243s: waiting for machine to come up
	I0920 18:47:38.145444  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:38.145875  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:38.145907  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:38.145806  763373 retry.go:31] will retry after 3.304630599s: waiting for machine to come up
	I0920 18:47:41.451734  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:41.452111  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:41.452140  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:41.452065  763373 retry.go:31] will retry after 3.579286099s: waiting for machine to come up
	I0920 18:47:45.035810  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:45.036306  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:45.036331  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:45.036255  763373 retry.go:31] will retry after 4.166411475s: waiting for machine to come up
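
The "will retry after …" lines above come from a wait loop that polls for the domain's DHCP lease with a growing delay until the machine reports an IP. A minimal sketch of that retry pattern, assuming a hypothetical lookupIP helper in place of the real libvirt lease query:

    // Sketch: poll for a VM's IP with a growing, slightly jittered delay until a deadline.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for the real query against the libvirt network's DHCP leases.
    func lookupIP() (string, error) {
        return "", errors.New("no lease yet")
    }

    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
            delay += time.Duration(rand.Int63n(int64(delay))) // grow with jitter, roughly as in the log
        }
        return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
        if ip, err := waitForIP(30 * time.Second); err != nil {
            fmt.Println(err)
        } else {
            fmt.Println("found IP:", ip)
        }
    }
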
	I0920 18:47:49.204465  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.205113  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has current primary IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.205136  762988 main.go:141] libmachine: (ha-525790-m02) Found IP for machine: 192.168.39.246
	I0920 18:47:49.205146  762988 main.go:141] libmachine: (ha-525790-m02) Reserving static IP address...
	I0920 18:47:49.205644  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find host DHCP lease matching {name: "ha-525790-m02", mac: "52:54:00:da:aa:a2", ip: "192.168.39.246"} in network mk-ha-525790
	I0920 18:47:49.279479  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Getting to WaitForSSH function...
	I0920 18:47:49.279570  762988 main.go:141] libmachine: (ha-525790-m02) Reserved static IP address: 192.168.39.246
	I0920 18:47:49.279586  762988 main.go:141] libmachine: (ha-525790-m02) Waiting for SSH to be available...
	I0920 18:47:49.282091  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.282697  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:minikube Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:49.282724  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.282939  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Using SSH client type: external
	I0920 18:47:49.282962  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/id_rsa (-rw-------)
	I0920 18:47:49.283009  762988 main.go:141] libmachine: (ha-525790-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:47:49.283028  762988 main.go:141] libmachine: (ha-525790-m02) DBG | About to run SSH command:
	I0920 18:47:49.283043  762988 main.go:141] libmachine: (ha-525790-m02) DBG | exit 0
	I0920 18:47:49.406686  762988 main.go:141] libmachine: (ha-525790-m02) DBG | SSH cmd err, output: <nil>: 
	I0920 18:47:49.406894  762988 main.go:141] libmachine: (ha-525790-m02) KVM machine creation complete!
	I0920 18:47:49.407253  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetConfigRaw
	I0920 18:47:49.407921  762988 main.go:141] libmachine: (ha-525790-m02) Calling .DriverName
	I0920 18:47:49.408101  762988 main.go:141] libmachine: (ha-525790-m02) Calling .DriverName
	I0920 18:47:49.408280  762988 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 18:47:49.408299  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetState
	I0920 18:47:49.409531  762988 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 18:47:49.409549  762988 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 18:47:49.409556  762988 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 18:47:49.409565  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:47:49.411929  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.412327  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:49.412357  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.412422  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:47:49.412599  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:49.412798  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:49.412930  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:47:49.413134  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:49.413339  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 18:47:49.413349  762988 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 18:47:49.514173  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
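
Both SSH checks above simply run `exit 0` on the guest and treat a zero exit status as "SSH is available". A minimal sketch of the same probe using the system ssh binary; the address, user, and key path are taken from the log purely for illustration:

    // Sketch: probe SSH readiness by running "exit 0" on the guest with the system ssh client.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func sshReady(ip, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-i", keyPath,
            "docker@"+ip,
            "exit 0")
        return cmd.Run() == nil // exit status 0 means the shell is reachable
    }

    func main() {
        for i := 0; i < 30; i++ {
            if sshReady("192.168.39.246", "/path/to/id_rsa") { // values for illustration only
                fmt.Println("SSH is available")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("gave up waiting for SSH")
    }
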
	I0920 18:47:49.514209  762988 main.go:141] libmachine: Detecting the provisioner...
	I0920 18:47:49.514222  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:47:49.516963  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.517420  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:49.517450  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.517591  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:47:49.517799  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:49.517980  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:49.518113  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:47:49.518250  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:49.518433  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 18:47:49.518443  762988 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 18:47:49.619473  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 18:47:49.619576  762988 main.go:141] libmachine: found compatible host: buildroot
	I0920 18:47:49.619587  762988 main.go:141] libmachine: Provisioning with buildroot...
	I0920 18:47:49.619599  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetMachineName
	I0920 18:47:49.619832  762988 buildroot.go:166] provisioning hostname "ha-525790-m02"
	I0920 18:47:49.619860  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetMachineName
	I0920 18:47:49.620048  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:47:49.622596  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.622960  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:49.622986  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.623162  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:47:49.623347  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:49.623512  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:49.623614  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:47:49.623826  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:49.624053  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 18:47:49.624072  762988 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-525790-m02 && echo "ha-525790-m02" | sudo tee /etc/hostname
	I0920 18:47:49.741686  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-525790-m02
	
	I0920 18:47:49.741719  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:47:49.744162  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.744537  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:49.744566  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.744764  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:47:49.744977  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:49.745123  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:49.745246  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:47:49.745415  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:49.745636  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 18:47:49.745654  762988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-525790-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-525790-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-525790-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:47:49.861819  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:47:49.861869  762988 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19678-739831/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-739831/.minikube}
	I0920 18:47:49.861890  762988 buildroot.go:174] setting up certificates
	I0920 18:47:49.861903  762988 provision.go:84] configureAuth start
	I0920 18:47:49.861915  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetMachineName
	I0920 18:47:49.862237  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetIP
	I0920 18:47:49.864787  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.865160  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:49.865188  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.865324  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:47:49.867360  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.867673  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:49.867699  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.867911  762988 provision.go:143] copyHostCerts
	I0920 18:47:49.867938  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 18:47:49.867981  762988 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem, removing ...
	I0920 18:47:49.867990  762988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 18:47:49.868053  762988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem (1078 bytes)
	I0920 18:47:49.868121  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 18:47:49.868140  762988 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem, removing ...
	I0920 18:47:49.868144  762988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 18:47:49.868168  762988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem (1123 bytes)
	I0920 18:47:49.868256  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 18:47:49.868279  762988 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem, removing ...
	I0920 18:47:49.868285  762988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 18:47:49.868309  762988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem (1679 bytes)
	I0920 18:47:49.868354  762988 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem org=jenkins.ha-525790-m02 san=[127.0.0.1 192.168.39.246 ha-525790-m02 localhost minikube]
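
configureAuth issues a server certificate whose SANs are the list shown above (127.0.0.1, the node IP, the hostname, localhost, minikube). A simplified sketch of producing a certificate with those SANs using crypto/x509; it self-signs for brevity, whereas minikube signs with its local CA key:

    // Simplified sketch: issue a server certificate carrying the SANs seen in the log above.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }

        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-525790-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-525790-m02", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.246")},
        }

        // Self-signed for the sketch: template doubles as parent.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        out := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        if err := os.WriteFile("server.pem", out, 0644); err != nil {
            log.Fatal(err)
        }
        log.Println("wrote server.pem")
    }
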
	I0920 18:47:50.026326  762988 provision.go:177] copyRemoteCerts
	I0920 18:47:50.026387  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:47:50.026413  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:47:50.029067  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.029469  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:50.029558  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.029689  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:47:50.029875  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:50.030065  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:47:50.030209  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/id_rsa Username:docker}
	I0920 18:47:50.113429  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 18:47:50.113512  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:47:50.138381  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 18:47:50.138457  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 18:47:50.162199  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 18:47:50.162285  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:47:50.185945  762988 provision.go:87] duration metric: took 324.027275ms to configureAuth
	I0920 18:47:50.185972  762988 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:47:50.186148  762988 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:47:50.186225  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:47:50.190079  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.190492  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:50.190513  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.190710  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:47:50.190964  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:50.191145  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:50.191294  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:47:50.191424  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:50.191588  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 18:47:50.191602  762988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:47:50.416583  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:47:50.416624  762988 main.go:141] libmachine: Checking connection to Docker...
	I0920 18:47:50.416631  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetURL
	I0920 18:47:50.417912  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Using libvirt version 6000000
	I0920 18:47:50.420017  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.420424  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:50.420454  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.420641  762988 main.go:141] libmachine: Docker is up and running!
	I0920 18:47:50.420664  762988 main.go:141] libmachine: Reticulating splines...
	I0920 18:47:50.420672  762988 client.go:171] duration metric: took 24.914041264s to LocalClient.Create
	I0920 18:47:50.420699  762988 start.go:167] duration metric: took 24.914113541s to libmachine.API.Create "ha-525790"
	I0920 18:47:50.420712  762988 start.go:293] postStartSetup for "ha-525790-m02" (driver="kvm2")
	I0920 18:47:50.420726  762988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:47:50.420744  762988 main.go:141] libmachine: (ha-525790-m02) Calling .DriverName
	I0920 18:47:50.420995  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:47:50.421029  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:47:50.423161  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.423420  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:50.423447  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.423594  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:47:50.423797  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:50.423953  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:47:50.424081  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/id_rsa Username:docker}
	I0920 18:47:50.505401  762988 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:47:50.510220  762988 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:47:50.510246  762988 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/addons for local assets ...
	I0920 18:47:50.510332  762988 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/files for local assets ...
	I0920 18:47:50.510417  762988 filesync.go:149] local asset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> 7484972.pem in /etc/ssl/certs
	I0920 18:47:50.510429  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /etc/ssl/certs/7484972.pem
	I0920 18:47:50.510527  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:47:50.520201  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 18:47:50.544692  762988 start.go:296] duration metric: took 123.962986ms for postStartSetup
	I0920 18:47:50.544747  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetConfigRaw
	I0920 18:47:50.545353  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetIP
	I0920 18:47:50.548132  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.548490  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:50.548517  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.548850  762988 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 18:47:50.549085  762988 start.go:128] duration metric: took 25.06099769s to createHost
	I0920 18:47:50.549116  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:47:50.551581  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.551997  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:50.552025  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.552177  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:47:50.552377  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:50.552543  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:50.552681  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:47:50.552832  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:50.553008  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 18:47:50.553021  762988 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:47:50.655701  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726858070.610915334
	
	I0920 18:47:50.655725  762988 fix.go:216] guest clock: 1726858070.610915334
	I0920 18:47:50.655734  762988 fix.go:229] Guest: 2024-09-20 18:47:50.610915334 +0000 UTC Remote: 2024-09-20 18:47:50.549100081 +0000 UTC m=+71.798161303 (delta=61.815253ms)
	I0920 18:47:50.655756  762988 fix.go:200] guest clock delta is within tolerance: 61.815253ms
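
The guest clock check above parses the output of `date +%s.%N` on the new VM and compares it with the host clock; provisioning continues here because the ~61ms delta is inside the allowed skew. A small sketch of that comparison; the tolerance constant is an assumption for illustration:

    // Sketch: parse the guest's "date +%s.%N" output and check the skew against a tolerance.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            // pad/truncate the fractional part to 9 digits before parsing as nanoseconds
            frac := (parts[1] + "000000000")[:9]
            if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1726858070.610915334") // value from the log above
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // assumed tolerance for illustration
        fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
    }
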
	I0920 18:47:50.655762  762988 start.go:83] releasing machines lock for "ha-525790-m02", held for 25.167790601s
	I0920 18:47:50.655785  762988 main.go:141] libmachine: (ha-525790-m02) Calling .DriverName
	I0920 18:47:50.656107  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetIP
	I0920 18:47:50.658651  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.659046  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:50.659073  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.661685  762988 out.go:177] * Found network options:
	I0920 18:47:50.663168  762988 out.go:177]   - NO_PROXY=192.168.39.149
	W0920 18:47:50.664561  762988 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 18:47:50.664590  762988 main.go:141] libmachine: (ha-525790-m02) Calling .DriverName
	I0920 18:47:50.665196  762988 main.go:141] libmachine: (ha-525790-m02) Calling .DriverName
	I0920 18:47:50.665478  762988 main.go:141] libmachine: (ha-525790-m02) Calling .DriverName
	I0920 18:47:50.665602  762988 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:47:50.665662  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	W0920 18:47:50.665708  762988 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 18:47:50.665796  762988 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:47:50.665818  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:47:50.668764  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.668800  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.669194  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:50.669220  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.669246  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:50.669261  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.669369  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:47:50.669464  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:47:50.669573  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:50.669655  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:50.669713  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:47:50.669774  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:47:50.669844  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/id_rsa Username:docker}
	I0920 18:47:50.669922  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/id_rsa Username:docker}
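Every "Run:" entry in this log is a command executed on the node over one of the SSH sessions whose creation is logged just above (sshutil.go). As a minimal, hedged sketch of that pattern using golang.org/x/crypto/ssh (the host, user and command below come from the log; the key path is truncated as in the log, and the helper name and error handling are illustrative, not minikube's actual ssh_runner):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote dials host:22 as user with the given private key and returns
// the combined output of cmd, mirroring what a single "Run: <cmd>" log
// line represents. Illustrative helper, not minikube's implementation.
func runRemote(host, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only
	}
	client, err := ssh.Dial("tcp", host+":22", cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("192.168.39.246", "docker",
		"/home/jenkins/.../id_rsa", // placeholder; real path is the one logged above
		`sh -c "stat /etc/cni/net.d/*loopback.conf*"`)
	if err != nil {
		log.Printf("command failed: %v", err)
	}
	fmt.Print(out)
}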
	I0920 18:47:50.909505  762988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:47:50.915357  762988 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:47:50.915439  762988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:47:50.932184  762988 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:47:50.932206  762988 start.go:495] detecting cgroup driver to use...
	I0920 18:47:50.932266  762988 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:47:50.948362  762988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:47:50.962800  762988 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:47:50.962889  762988 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:47:50.976893  762988 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:47:50.992982  762988 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:47:51.118282  762988 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:47:51.256995  762988 docker.go:233] disabling docker service ...
	I0920 18:47:51.257080  762988 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:47:51.271445  762988 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:47:51.284437  762988 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:47:51.427984  762988 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:47:51.540460  762988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:47:51.554587  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:47:51.573609  762988 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:47:51.573684  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:51.583854  762988 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:47:51.583919  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:51.594247  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:51.604465  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:51.614547  762988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:47:51.624622  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:51.634811  762988 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:51.651778  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:51.661817  762988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:47:51.670752  762988 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:47:51.670816  762988 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:47:51.683631  762988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:47:51.692558  762988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:47:51.804846  762988 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:47:51.893367  762988 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:47:51.893448  762988 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:47:51.898101  762988 start.go:563] Will wait 60s for crictl version
	I0920 18:47:51.898148  762988 ssh_runner.go:195] Run: which crictl
	I0920 18:47:51.901983  762988 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:47:51.945514  762988 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:47:51.945611  762988 ssh_runner.go:195] Run: crio --version
	I0920 18:47:51.973141  762988 ssh_runner.go:195] Run: crio --version
	I0920 18:47:52.003666  762988 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:47:52.005189  762988 out.go:177]   - env NO_PROXY=192.168.39.149
	I0920 18:47:52.006445  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetIP
	I0920 18:47:52.008892  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:52.009199  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:52.009224  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:52.009410  762988 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:47:52.013674  762988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:47:52.025912  762988 mustload.go:65] Loading cluster: ha-525790
	I0920 18:47:52.026090  762988 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:47:52.026337  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:47:52.026371  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:47:52.041555  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38991
	I0920 18:47:52.042164  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:47:52.042654  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:47:52.042674  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:47:52.043081  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:47:52.043293  762988 main.go:141] libmachine: (ha-525790) Calling .GetState
	I0920 18:47:52.044999  762988 host.go:66] Checking if "ha-525790" exists ...
	I0920 18:47:52.045304  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:47:52.045340  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:47:52.060489  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44511
	I0920 18:47:52.060988  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:47:52.061514  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:47:52.061548  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:47:52.061872  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:47:52.062063  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:52.062249  762988 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790 for IP: 192.168.39.246
	I0920 18:47:52.062265  762988 certs.go:194] generating shared ca certs ...
	I0920 18:47:52.062284  762988 certs.go:226] acquiring lock for ca certs: {Name:mkf559981e1ff96dd3b092845a7637f34a653668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:52.062496  762988 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key
	I0920 18:47:52.062557  762988 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key
	I0920 18:47:52.062572  762988 certs.go:256] generating profile certs ...
	I0920 18:47:52.062674  762988 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.key
	I0920 18:47:52.062712  762988 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.b06313b5
	I0920 18:47:52.062734  762988 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.b06313b5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.149 192.168.39.246 192.168.39.254]
	I0920 18:47:52.367330  762988 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.b06313b5 ...
	I0920 18:47:52.367365  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.b06313b5: {Name:mka76a58a80092d1cbec495d718f7bdea16bb00c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:52.367534  762988 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.b06313b5 ...
	I0920 18:47:52.367547  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.b06313b5: {Name:mkf8231ebc436432da2597e17792d752485bca58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:52.367622  762988 certs.go:381] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.b06313b5 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt
	I0920 18:47:52.367755  762988 certs.go:385] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.b06313b5 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key
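The profile certificate generated here is an apiserver serving certificate whose IP SANs cover the in-cluster service IPs, localhost, both control-plane node IPs and the kube-vip VIP 192.168.39.254, as listed in the "Generating cert ... with IP's" line above. A minimal sketch of issuing such a certificate from an existing CA with crypto/x509 (file names, the CommonName and the assumption of a PKCS#1 RSA CA key are illustrative, not minikube's code):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// signServingCert issues a serving cert for the given IP SANs, signed by the
// CA whose PEM-encoded cert and RSA key are passed in. Illustrative only.
func signServingCert(caCertPEM, caKeyPEM []byte, ips []net.IP) (certPEM, keyPEM []byte, err error) {
	caBlock, _ := pem.Decode(caCertPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		return nil, nil, err
	}
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key
	if err != nil {
		return nil, nil, err
	}
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube-apiserver"}, // CN is illustrative
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips, // e.g. 10.96.0.1, 127.0.0.1, 192.168.39.149, 192.168.39.246, 192.168.39.254
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(leafKey)})
	return certPEM, keyPEM, nil
}

func main() {
	caCrt, _ := os.ReadFile("ca.crt")
	caKey, _ := os.ReadFile("ca.key")
	crt, key, err := signServingCert(caCrt, caKey, []net.IP{net.ParseIP("192.168.39.254")})
	if err != nil {
		panic(err)
	}
	_ = os.WriteFile("apiserver.crt", crt, 0644)
	_ = os.WriteFile("apiserver.key", key, 0600)
}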
	I0920 18:47:52.367883  762988 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key
	I0920 18:47:52.367899  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 18:47:52.367912  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 18:47:52.367926  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 18:47:52.367938  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 18:47:52.367950  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 18:47:52.367961  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 18:47:52.367973  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 18:47:52.367983  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 18:47:52.368035  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem (1338 bytes)
	W0920 18:47:52.368066  762988 certs.go:480] ignoring /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497_empty.pem, impossibly tiny 0 bytes
	I0920 18:47:52.368075  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:47:52.368096  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:47:52.368117  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:47:52.368141  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem (1679 bytes)
	I0920 18:47:52.368184  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 18:47:52.368212  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:47:52.368225  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem -> /usr/share/ca-certificates/748497.pem
	I0920 18:47:52.368237  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /usr/share/ca-certificates/7484972.pem
	I0920 18:47:52.368269  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:52.371227  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:52.371645  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:52.371674  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:52.371783  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:52.371999  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:52.372168  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:52.372324  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:47:52.443286  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0920 18:47:52.448837  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0920 18:47:52.460311  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0920 18:47:52.464490  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0920 18:47:52.475983  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0920 18:47:52.480213  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0920 18:47:52.494615  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0920 18:47:52.499007  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0920 18:47:52.508955  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0920 18:47:52.516124  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0920 18:47:52.526659  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0920 18:47:52.530903  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0920 18:47:52.541062  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:47:52.569451  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:47:52.592930  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:47:52.616256  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:47:52.639385  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0920 18:47:52.662394  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:47:52.686445  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:47:52.710153  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:47:52.734191  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:47:52.757258  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem --> /usr/share/ca-certificates/748497.pem (1338 bytes)
	I0920 18:47:52.780903  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /usr/share/ca-certificates/7484972.pem (1708 bytes)
	I0920 18:47:52.804939  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0920 18:47:52.821362  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0920 18:47:52.837317  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0920 18:47:52.853233  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0920 18:47:52.869254  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0920 18:47:52.885005  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0920 18:47:52.900806  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0920 18:47:52.917027  762988 ssh_runner.go:195] Run: openssl version
	I0920 18:47:52.922702  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:47:52.933000  762988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:47:52.937464  762988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:47:52.937523  762988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:47:52.943170  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:47:52.953509  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/748497.pem && ln -fs /usr/share/ca-certificates/748497.pem /etc/ssl/certs/748497.pem"
	I0920 18:47:52.964038  762988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/748497.pem
	I0920 18:47:52.968718  762988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 18:33 /usr/share/ca-certificates/748497.pem
	I0920 18:47:52.968771  762988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/748497.pem
	I0920 18:47:52.974378  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/748497.pem /etc/ssl/certs/51391683.0"
	I0920 18:47:52.984752  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7484972.pem && ln -fs /usr/share/ca-certificates/7484972.pem /etc/ssl/certs/7484972.pem"
	I0920 18:47:52.994888  762988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7484972.pem
	I0920 18:47:52.999311  762988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 18:33 /usr/share/ca-certificates/7484972.pem
	I0920 18:47:52.999370  762988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7484972.pem
	I0920 18:47:53.005001  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7484972.pem /etc/ssl/certs/3ec20f2e.0"
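The openssl x509 -hash / ln -fs pairs above install each copied PEM into the OpenSSL hashed-certificate layout: /etc/ssl/certs/<subject-hash>.0 has to point at the certificate so TLS code on the node can find it by subject hash. A small sketch of the same step from Go (the certificate path is one from the log; the helper name and use of os/exec are illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkHashed creates the /etc/ssl/certs/<subject-hash>.0 symlink for certPath,
// matching the "openssl x509 -hash -noout" + "ln -fs" pair seen in the log.
func linkHashed(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // "-f" semantics: replace an existing link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkHashed("/etc/ssl/certs/minikubeCA.pem"); err != nil {
		panic(err)
	}
}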
	I0920 18:47:53.015691  762988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:47:53.019635  762988 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 18:47:53.019692  762988 kubeadm.go:934] updating node {m02 192.168.39.246 8443 v1.31.1 crio true true} ...
	I0920 18:47:53.019793  762988 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-525790-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
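The [Unit]/[Service]/[Install] text above is the kubelet systemd drop-in that kubeadm.go renders for this node, with the node-specific --hostname-override and --node-ip flags filled in. As a hedged illustration of that rendering step (the template text is trimmed and the field names are invented; this is not minikube's actual template), a text/template version could look like:

package main

import (
	"os"
	"text/template"
)

// kubeletUnit is a simplified stand-in for the drop-in shown in the log;
// only the node-specific flags are parameterised here.
const kubeletUnit = `[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	_ = t.Execute(os.Stdout, struct {
		BinDir, NodeName, NodeIP string
	}{
		BinDir:   "/var/lib/minikube/binaries/v1.31.1",
		NodeName: "ha-525790-m02",
		NodeIP:   "192.168.39.246",
	})
}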
	I0920 18:47:53.019822  762988 kube-vip.go:115] generating kube-vip config ...
	I0920 18:47:53.019860  762988 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 18:47:53.036153  762988 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 18:47:53.036237  762988 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0920 18:47:53.036305  762988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:47:53.046004  762988 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0920 18:47:53.046062  762988 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0920 18:47:53.055936  762988 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0920 18:47:53.055979  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 18:47:53.056005  762988 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0920 18:47:53.056053  762988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 18:47:53.056076  762988 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0920 18:47:53.060289  762988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0920 18:47:53.060315  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0920 18:47:53.789944  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 18:47:53.790047  762988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 18:47:53.795156  762988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0920 18:47:53.795193  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0920 18:47:53.889636  762988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:47:53.918466  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 18:47:53.918585  762988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 18:47:53.930311  762988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0920 18:47:53.930362  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
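Because /var/lib/minikube/binaries/v1.31.1 is empty on the new node, the kubectl, kubeadm and kubelet binaries are first downloaded from dl.k8s.io into the local cache (with the published .sha256 files as checksums, per the download URLs logged above) and then copied over SSH. A stripped-down sketch of that download-and-verify step (URL and cache path are taken from the log; the helper itself is illustrative, not minikube's download package):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchVerified downloads url to dest and checks its SHA-256 against the
// digest published at url+".sha256", as dl.k8s.io serves it.
func fetchVerified(url, dest string) error {
	want, err := httpGetString(url + ".sha256")
	if err != nil {
		return err
	}
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != strings.TrimSpace(want) {
		return fmt.Errorf("checksum mismatch for %s: got %s want %s", url, got, want)
	}
	return nil
}

func httpGetString(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	return string(b), err
}

func main() {
	if err := fetchVerified(
		"https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet",
		"/home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubelet",
	); err != nil {
		panic(err)
	}
}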
	I0920 18:47:54.378013  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0920 18:47:54.388156  762988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 18:47:54.404650  762988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:47:54.420945  762988 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 18:47:54.437522  762988 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 18:47:54.441369  762988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:47:54.453920  762988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:47:54.571913  762988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:47:54.589386  762988 host.go:66] Checking if "ha-525790" exists ...
	I0920 18:47:54.589919  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:47:54.589985  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:47:54.605308  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39785
	I0920 18:47:54.605924  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:47:54.606447  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:47:54.606470  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:47:54.606870  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:47:54.607082  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:54.607245  762988 start.go:317] joinCluster: &{Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:47:54.607339  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0920 18:47:54.607355  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:54.610593  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:54.611156  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:54.611186  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:54.611363  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:54.611536  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:54.611703  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:54.611875  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:47:54.765700  762988 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:47:54.765757  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token fgipyq.kw78xdqejinofgh1 --discovery-token-ca-cert-hash sha256:947ef21afc8104efa9fe7e5dbe397ab7540e2665a521761d784eb9c9d11b061d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-525790-m02 --control-plane --apiserver-advertise-address=192.168.39.246 --apiserver-bind-port=8443"
	I0920 18:48:15.991126  762988 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token fgipyq.kw78xdqejinofgh1 --discovery-token-ca-cert-hash sha256:947ef21afc8104efa9fe7e5dbe397ab7540e2665a521761d784eb9c9d11b061d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-525790-m02 --control-plane --apiserver-advertise-address=192.168.39.246 --apiserver-bind-port=8443": (21.225342383s)
	I0920 18:48:15.991161  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0920 18:48:16.566701  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-525790-m02 minikube.k8s.io/updated_at=2024_09_20T18_48_16_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a minikube.k8s.io/name=ha-525790 minikube.k8s.io/primary=false
	I0920 18:48:16.719509  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-525790-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0920 18:48:16.847244  762988 start.go:319] duration metric: took 22.239995563s to joinCluster
	I0920 18:48:16.847322  762988 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:48:16.847615  762988 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:48:16.849000  762988 out.go:177] * Verifying Kubernetes components...
	I0920 18:48:16.850372  762988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:48:17.092103  762988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:48:17.120788  762988 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:48:17.121173  762988 kapi.go:59] client config for ha-525790: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.crt", KeyFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.key", CAFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0920 18:48:17.121271  762988 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.149:8443
	I0920 18:48:17.121564  762988 node_ready.go:35] waiting up to 6m0s for node "ha-525790-m02" to be "Ready" ...
	I0920 18:48:17.121729  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:17.121741  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:17.121752  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:17.121758  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:17.132247  762988 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0920 18:48:17.622473  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:17.622504  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:17.622516  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:17.622523  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:17.625769  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:18.122399  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:18.122419  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:18.122427  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:18.122432  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:18.136165  762988 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0920 18:48:18.622000  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:18.622027  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:18.622037  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:18.622041  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:18.626792  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:19.122652  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:19.122677  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:19.122685  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:19.122691  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:19.125929  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:19.126379  762988 node_ready.go:53] node "ha-525790-m02" has status "Ready":"False"
	I0920 18:48:19.622318  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:19.622339  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:19.622347  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:19.622351  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:19.625821  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:20.121842  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:20.121865  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:20.121874  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:20.121879  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:20.126973  762988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 18:48:20.622440  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:20.622464  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:20.622472  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:20.622476  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:20.625669  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:21.122479  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:21.122503  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:21.122514  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:21.122518  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:21.126309  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:21.127070  762988 node_ready.go:53] node "ha-525790-m02" has status "Ready":"False"
	I0920 18:48:21.622431  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:21.622455  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:21.622464  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:21.622467  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:21.625353  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:22.122551  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:22.122577  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:22.122588  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:22.122594  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:22.130464  762988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0920 18:48:22.622444  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:22.622465  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:22.622473  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:22.622476  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:22.624966  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:23.121881  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:23.121906  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:23.121915  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:23.121918  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:23.126058  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:23.621933  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:23.621958  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:23.621967  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:23.621971  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:23.625609  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:23.626079  762988 node_ready.go:53] node "ha-525790-m02" has status "Ready":"False"
	I0920 18:48:24.121954  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:24.121979  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:24.121986  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:24.121990  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:24.126296  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:24.622206  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:24.622229  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:24.622237  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:24.622241  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:24.625435  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:25.121906  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:25.121929  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:25.121937  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:25.121943  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:25.125410  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:25.622826  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:25.622865  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:25.622883  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:25.622888  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:25.626033  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:25.626689  762988 node_ready.go:53] node "ha-525790-m02" has status "Ready":"False"
	I0920 18:48:26.121997  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:26.122029  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:26.122041  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:26.122047  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:26.126269  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:26.622175  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:26.622199  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:26.622207  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:26.622216  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:26.625403  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:27.122340  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:27.122371  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:27.122386  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:27.122391  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:27.126523  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:27.622670  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:27.622696  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:27.622708  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:27.622714  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:27.625864  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:28.121813  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:28.121839  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:28.121856  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:28.121861  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:28.127100  762988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 18:48:28.127893  762988 node_ready.go:53] node "ha-525790-m02" has status "Ready":"False"
	I0920 18:48:28.622194  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:28.622218  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:28.622226  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:28.622231  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:28.625675  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:29.122510  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:29.122544  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:29.122556  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:29.122561  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:29.126584  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:29.622212  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:29.622230  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:29.622238  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:29.622242  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:29.625683  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:30.121899  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:30.121923  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:30.121931  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:30.121938  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:30.126500  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:30.622237  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:30.622262  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:30.622273  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:30.622282  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:30.625998  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:30.626739  762988 node_ready.go:53] node "ha-525790-m02" has status "Ready":"False"
	I0920 18:48:31.122135  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:31.122162  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:31.122175  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:31.122180  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:31.126468  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:31.622529  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:31.622556  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:31.622568  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:31.622574  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:31.625581  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:32.122718  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:32.122743  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:32.122753  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:32.122758  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:32.126212  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:32.622048  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:32.622078  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:32.622090  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:32.622097  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:32.625566  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:33.122722  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:33.122748  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:33.122759  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:33.122766  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:33.125690  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:33.126429  762988 node_ready.go:53] node "ha-525790-m02" has status "Ready":"False"
	I0920 18:48:33.622805  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:33.622839  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:33.622867  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:33.622874  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:33.626126  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:34.122562  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:34.122584  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.122593  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.122596  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.125490  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:34.126097  762988 node_ready.go:49] node "ha-525790-m02" has status "Ready":"True"
	I0920 18:48:34.126121  762988 node_ready.go:38] duration metric: took 17.004511153s for node "ha-525790-m02" to be "Ready" ...
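The node_ready wait above is a plain polling loop: GET the node roughly every 500ms until its Ready condition reports True, then log the elapsed time. The sketch below shows that pattern with client-go; it is an illustration under assumed wiring (kubeconfig path, clientset construction), not minikube's actual node_ready.go.

// Illustrative sketch (not minikube's node_ready.go): poll a node's Ready
// condition with client-go until it becomes True or the timeout expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet" and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	// The kubeconfig path is an assumption for the example.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "ha-525790-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node ha-525790-m02 is Ready")
}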
	I0920 18:48:34.126132  762988 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:48:34.126214  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods
	I0920 18:48:34.126225  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.126235  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.126244  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.130332  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:34.136520  762988 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nfnkj" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.136636  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nfnkj
	I0920 18:48:34.136651  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.136659  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.136662  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.139356  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:34.140019  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:34.140035  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.140044  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.140050  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.142804  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:34.143520  762988 pod_ready.go:93] pod "coredns-7c65d6cfc9-nfnkj" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:34.143541  762988 pod_ready.go:82] duration metric: took 6.997099ms for pod "coredns-7c65d6cfc9-nfnkj" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.143552  762988 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rpcds" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.143630  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rpcds
	I0920 18:48:34.143640  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.143650  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.143656  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.146528  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:34.147267  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:34.147282  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.147291  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.147298  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.149448  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:34.149863  762988 pod_ready.go:93] pod "coredns-7c65d6cfc9-rpcds" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:34.149880  762988 pod_ready.go:82] duration metric: took 6.32048ms for pod "coredns-7c65d6cfc9-rpcds" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.149890  762988 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.149955  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/etcd-ha-525790
	I0920 18:48:34.149964  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.149974  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.149982  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.152307  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:34.152827  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:34.152841  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.152848  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.152852  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.155039  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:34.155552  762988 pod_ready.go:93] pod "etcd-ha-525790" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:34.155568  762988 pod_ready.go:82] duration metric: took 5.670104ms for pod "etcd-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.155578  762988 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.155636  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/etcd-ha-525790-m02
	I0920 18:48:34.155646  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.155655  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.155660  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.157775  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:34.158230  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:34.158244  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.158252  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.158256  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.160455  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:34.161045  762988 pod_ready.go:93] pod "etcd-ha-525790-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:34.161062  762988 pod_ready.go:82] duration metric: took 5.476839ms for pod "etcd-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.161078  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.323482  762988 request.go:632] Waited for 162.335052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790
	I0920 18:48:34.323561  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790
	I0920 18:48:34.323567  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.323577  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.323596  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.327021  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:34.523234  762988 request.go:632] Waited for 195.376284ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:34.523291  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:34.523297  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.523304  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.523308  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.526504  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:34.527263  762988 pod_ready.go:93] pod "kube-apiserver-ha-525790" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:34.527282  762988 pod_ready.go:82] duration metric: took 366.197667ms for pod "kube-apiserver-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.527291  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.722970  762988 request.go:632] Waited for 195.600109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790-m02
	I0920 18:48:34.723047  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790-m02
	I0920 18:48:34.723055  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.723066  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.723077  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.727681  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:34.922800  762988 request.go:632] Waited for 194.329492ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:34.922877  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:34.922883  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.922890  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.922895  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.925710  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:34.926612  762988 pod_ready.go:93] pod "kube-apiserver-ha-525790-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:34.926641  762988 pod_ready.go:82] duration metric: took 399.342285ms for pod "kube-apiserver-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.926656  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:35.122660  762988 request.go:632] Waited for 195.882629ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-525790
	I0920 18:48:35.122740  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-525790
	I0920 18:48:35.122749  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:35.122759  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:35.122770  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:35.126705  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:35.322726  762988 request.go:632] Waited for 195.293792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:35.322782  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:35.322787  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:35.322795  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:35.322800  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:35.326393  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:35.326918  762988 pod_ready.go:93] pod "kube-controller-manager-ha-525790" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:35.326946  762988 pod_ready.go:82] duration metric: took 400.278191ms for pod "kube-controller-manager-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:35.326961  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:35.523401  762988 request.go:632] Waited for 196.343619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-525790-m02
	I0920 18:48:35.523471  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-525790-m02
	I0920 18:48:35.523481  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:35.523489  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:35.523496  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:35.526931  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:35.722974  762988 request.go:632] Waited for 195.371903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:35.723051  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:35.723062  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:35.723074  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:35.723083  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:35.726332  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:35.726861  762988 pod_ready.go:93] pod "kube-controller-manager-ha-525790-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:35.726891  762988 pod_ready.go:82] duration metric: took 399.92136ms for pod "kube-controller-manager-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:35.726906  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-958jz" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:35.922820  762988 request.go:632] Waited for 195.83508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-958jz
	I0920 18:48:35.922930  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-958jz
	I0920 18:48:35.922936  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:35.922947  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:35.922954  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:35.926053  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:36.123110  762988 request.go:632] Waited for 196.38428ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:36.123185  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:36.123190  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:36.123198  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:36.123202  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:36.126954  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:36.127418  762988 pod_ready.go:93] pod "kube-proxy-958jz" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:36.127437  762988 pod_ready.go:82] duration metric: took 400.524478ms for pod "kube-proxy-958jz" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:36.127449  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sspfs" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:36.323527  762988 request.go:632] Waited for 195.98167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sspfs
	I0920 18:48:36.323598  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sspfs
	I0920 18:48:36.323607  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:36.323616  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:36.323622  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:36.327351  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:36.523422  762988 request.go:632] Waited for 195.381458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:36.523486  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:36.523492  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:36.523500  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:36.523509  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:36.526668  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:36.527360  762988 pod_ready.go:93] pod "kube-proxy-sspfs" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:36.527381  762988 pod_ready.go:82] duration metric: took 399.9242ms for pod "kube-proxy-sspfs" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:36.527392  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:36.723613  762988 request.go:632] Waited for 196.121297ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-525790
	I0920 18:48:36.723676  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-525790
	I0920 18:48:36.723681  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:36.723690  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:36.723695  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:36.726896  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:36.922949  762988 request.go:632] Waited for 195.378354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:36.923034  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:36.923046  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:36.923061  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:36.923071  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:36.926320  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:36.926935  762988 pod_ready.go:93] pod "kube-scheduler-ha-525790" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:36.926956  762988 pod_ready.go:82] duration metric: took 399.558392ms for pod "kube-scheduler-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:36.926967  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:37.122901  762988 request.go:632] Waited for 195.82569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-525790-m02
	I0920 18:48:37.122982  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-525790-m02
	I0920 18:48:37.122988  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:37.122996  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:37.123003  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:37.126347  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:37.323372  762988 request.go:632] Waited for 196.406319ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:37.323437  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:37.323442  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:37.323450  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:37.323457  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:37.326709  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:37.327455  762988 pod_ready.go:93] pod "kube-scheduler-ha-525790-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:37.327476  762988 pod_ready.go:82] duration metric: took 400.502746ms for pod "kube-scheduler-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:37.327489  762988 pod_ready.go:39] duration metric: took 3.201339533s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
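The recurring "Waited for ... due to client-side throttling, not priority and fairness" messages in this stretch come from client-go's default client-side rate limiter (QPS 5, burst 10), not from the API server. If a burst of per-pod GETs like the one above needed to run without those delays, the limiter can be raised on the rest.Config before building the clientset; a minimal sketch with illustrative values:

// Illustrative sketch: raise client-go's client-side rate limits so a burst of
// requests is not delayed by the default limiter (QPS 5, burst 10).
package clientutil

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func newUnthrottledClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // default is 5 requests/second
	cfg.Burst = 100 // default burst is 10
	return kubernetes.NewForConfig(cfg)
}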
	I0920 18:48:37.327504  762988 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:48:37.327555  762988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:48:37.343797  762988 api_server.go:72] duration metric: took 20.496433387s to wait for apiserver process to appear ...
	I0920 18:48:37.343829  762988 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:48:37.343854  762988 api_server.go:253] Checking apiserver healthz at https://192.168.39.149:8443/healthz ...
	I0920 18:48:37.348107  762988 api_server.go:279] https://192.168.39.149:8443/healthz returned 200:
	ok
	I0920 18:48:37.348169  762988 round_trippers.go:463] GET https://192.168.39.149:8443/version
	I0920 18:48:37.348176  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:37.348184  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:37.348191  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:37.349126  762988 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0920 18:48:37.349250  762988 api_server.go:141] control plane version: v1.31.1
	I0920 18:48:37.349267  762988 api_server.go:131] duration metric: took 5.431776ms to wait for apiserver health ...
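The healthz wait above is simply an HTTPS GET against /healthz that expects status 200 and the body "ok". A minimal sketch of such a probe follows; skipping TLS verification here stands in for the cluster CA and client certificates a real caller would present.

// Illustrative sketch: probe the apiserver's /healthz endpoint and check for
// a 200 response with body "ok". InsecureSkipVerify is only for brevity.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func apiserverHealthy(base string) (bool, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(base + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.39.149:8443")
	fmt.Println(ok, err)
}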
	I0920 18:48:37.349274  762988 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:48:37.522627  762988 request.go:632] Waited for 173.275089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods
	I0920 18:48:37.522715  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods
	I0920 18:48:37.522723  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:37.522731  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:37.522738  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:37.528234  762988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 18:48:37.534123  762988 system_pods.go:59] 17 kube-system pods found
	I0920 18:48:37.534155  762988 system_pods.go:61] "coredns-7c65d6cfc9-nfnkj" [7994989d-6bfa-4d25-b7b7-662d2e6c742c] Running
	I0920 18:48:37.534161  762988 system_pods.go:61] "coredns-7c65d6cfc9-rpcds" [7db58219-7147-4a45-b233-ef3c698566ef] Running
	I0920 18:48:37.534171  762988 system_pods.go:61] "etcd-ha-525790" [f23cd40e-ac8d-451b-9bf9-2ef5d62ef4b6] Running
	I0920 18:48:37.534176  762988 system_pods.go:61] "etcd-ha-525790-m02" [5a29103e-6da3-40d1-be3c-58fdc0f28b54] Running
	I0920 18:48:37.534181  762988 system_pods.go:61] "kindnet-8glgp" [f462782e-1ff6-410a-8359-de3360d380b0] Running
	I0920 18:48:37.534186  762988 system_pods.go:61] "kindnet-9qbm6" [87e8ae18-a561-48ec-9835-27446b6917d3] Running
	I0920 18:48:37.534190  762988 system_pods.go:61] "kube-apiserver-ha-525790" [0e3563fd-5185-4dc6-8d9b-a7d954b96c8d] Running
	I0920 18:48:37.534195  762988 system_pods.go:61] "kube-apiserver-ha-525790-m02" [b3966e2e-ce3d-4916-b73c-0d80cd1793f0] Running
	I0920 18:48:37.534202  762988 system_pods.go:61] "kube-controller-manager-ha-525790" [1d695853-6a7e-487d-a52b-9aceb1fc9ff3] Running
	I0920 18:48:37.534210  762988 system_pods.go:61] "kube-controller-manager-ha-525790-m02" [090c1833-3800-4e13-b9a7-c03680f3d55d] Running
	I0920 18:48:37.534213  762988 system_pods.go:61] "kube-proxy-958jz" [46603403-eb82-4f15-a1da-da62194a072f] Running
	I0920 18:48:37.534216  762988 system_pods.go:61] "kube-proxy-sspfs" [15203515-fc45-4624-b97e-8ec247f01e2d] Running
	I0920 18:48:37.534221  762988 system_pods.go:61] "kube-scheduler-ha-525790" [8cb7e23e-c1d1-4753-9758-b17ef9fd08d7] Running
	I0920 18:48:37.534224  762988 system_pods.go:61] "kube-scheduler-ha-525790-m02" [dc9a5561-5d41-445d-a0ba-de3b2405f821] Running
	I0920 18:48:37.534228  762988 system_pods.go:61] "kube-vip-ha-525790" [0b318b1e-7a85-4c8c-8a5a-2fee226d7702] Running
	I0920 18:48:37.534231  762988 system_pods.go:61] "kube-vip-ha-525790-m02" [f2316231-5c1d-4bf2-ae62-5a4202b5818b] Running
	I0920 18:48:37.534234  762988 system_pods.go:61] "storage-provisioner" [ea6bf34f-c1f7-4216-a61f-be30846c991b] Running
	I0920 18:48:37.534241  762988 system_pods.go:74] duration metric: took 184.960329ms to wait for pod list to return data ...
	I0920 18:48:37.534252  762988 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:48:37.722639  762988 request.go:632] Waited for 188.265166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/default/serviceaccounts
	I0920 18:48:37.722711  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/default/serviceaccounts
	I0920 18:48:37.722717  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:37.722726  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:37.722730  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:37.726193  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:37.726449  762988 default_sa.go:45] found service account: "default"
	I0920 18:48:37.726469  762988 default_sa.go:55] duration metric: took 192.210022ms for default service account to be created ...
	I0920 18:48:37.726480  762988 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:48:37.922955  762988 request.go:632] Waited for 196.382479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods
	I0920 18:48:37.923039  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods
	I0920 18:48:37.923050  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:37.923065  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:37.923072  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:37.927492  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:37.932712  762988 system_pods.go:86] 17 kube-system pods found
	I0920 18:48:37.932740  762988 system_pods.go:89] "coredns-7c65d6cfc9-nfnkj" [7994989d-6bfa-4d25-b7b7-662d2e6c742c] Running
	I0920 18:48:37.932746  762988 system_pods.go:89] "coredns-7c65d6cfc9-rpcds" [7db58219-7147-4a45-b233-ef3c698566ef] Running
	I0920 18:48:37.932750  762988 system_pods.go:89] "etcd-ha-525790" [f23cd40e-ac8d-451b-9bf9-2ef5d62ef4b6] Running
	I0920 18:48:37.932754  762988 system_pods.go:89] "etcd-ha-525790-m02" [5a29103e-6da3-40d1-be3c-58fdc0f28b54] Running
	I0920 18:48:37.932757  762988 system_pods.go:89] "kindnet-8glgp" [f462782e-1ff6-410a-8359-de3360d380b0] Running
	I0920 18:48:37.932761  762988 system_pods.go:89] "kindnet-9qbm6" [87e8ae18-a561-48ec-9835-27446b6917d3] Running
	I0920 18:48:37.932765  762988 system_pods.go:89] "kube-apiserver-ha-525790" [0e3563fd-5185-4dc6-8d9b-a7d954b96c8d] Running
	I0920 18:48:37.932769  762988 system_pods.go:89] "kube-apiserver-ha-525790-m02" [b3966e2e-ce3d-4916-b73c-0d80cd1793f0] Running
	I0920 18:48:37.932774  762988 system_pods.go:89] "kube-controller-manager-ha-525790" [1d695853-6a7e-487d-a52b-9aceb1fc9ff3] Running
	I0920 18:48:37.932779  762988 system_pods.go:89] "kube-controller-manager-ha-525790-m02" [090c1833-3800-4e13-b9a7-c03680f3d55d] Running
	I0920 18:48:37.932786  762988 system_pods.go:89] "kube-proxy-958jz" [46603403-eb82-4f15-a1da-da62194a072f] Running
	I0920 18:48:37.932789  762988 system_pods.go:89] "kube-proxy-sspfs" [15203515-fc45-4624-b97e-8ec247f01e2d] Running
	I0920 18:48:37.932792  762988 system_pods.go:89] "kube-scheduler-ha-525790" [8cb7e23e-c1d1-4753-9758-b17ef9fd08d7] Running
	I0920 18:48:37.932797  762988 system_pods.go:89] "kube-scheduler-ha-525790-m02" [dc9a5561-5d41-445d-a0ba-de3b2405f821] Running
	I0920 18:48:37.932800  762988 system_pods.go:89] "kube-vip-ha-525790" [0b318b1e-7a85-4c8c-8a5a-2fee226d7702] Running
	I0920 18:48:37.932805  762988 system_pods.go:89] "kube-vip-ha-525790-m02" [f2316231-5c1d-4bf2-ae62-5a4202b5818b] Running
	I0920 18:48:37.932808  762988 system_pods.go:89] "storage-provisioner" [ea6bf34f-c1f7-4216-a61f-be30846c991b] Running
	I0920 18:48:37.932815  762988 system_pods.go:126] duration metric: took 206.326319ms to wait for k8s-apps to be running ...
	I0920 18:48:37.932824  762988 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:48:37.932877  762988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:48:37.949333  762988 system_svc.go:56] duration metric: took 16.495186ms WaitForService to wait for kubelet
	I0920 18:48:37.949367  762988 kubeadm.go:582] duration metric: took 21.102009969s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:48:37.949386  762988 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:48:38.122741  762988 request.go:632] Waited for 173.263132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes
	I0920 18:48:38.122838  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes
	I0920 18:48:38.122859  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:38.122875  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:38.122883  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:38.126598  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:38.127344  762988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:48:38.127374  762988 node_conditions.go:123] node cpu capacity is 2
	I0920 18:48:38.127387  762988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:48:38.127390  762988 node_conditions.go:123] node cpu capacity is 2
	I0920 18:48:38.127395  762988 node_conditions.go:105] duration metric: took 178.00469ms to run NodePressure ...
	I0920 18:48:38.127407  762988 start.go:241] waiting for startup goroutines ...
	I0920 18:48:38.127433  762988 start.go:255] writing updated cluster config ...
	I0920 18:48:38.129743  762988 out.go:201] 
	I0920 18:48:38.131559  762988 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:48:38.131667  762988 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 18:48:38.133474  762988 out.go:177] * Starting "ha-525790-m03" control-plane node in "ha-525790" cluster
	I0920 18:48:38.134688  762988 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:48:38.134716  762988 cache.go:56] Caching tarball of preloaded images
	I0920 18:48:38.134840  762988 preload.go:172] Found /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:48:38.134876  762988 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:48:38.135002  762988 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 18:48:38.135229  762988 start.go:360] acquireMachinesLock for ha-525790-m03: {Name:mke27b943eaf3105a3a7818ba8cbb5bd07aa92e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:48:38.135283  762988 start.go:364] duration metric: took 31.132µs to acquireMachinesLock for "ha-525790-m03"
	I0920 18:48:38.135310  762988 start.go:93] Provisioning new machine with config: &{Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:48:38.135483  762988 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0920 18:48:38.137252  762988 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 18:48:38.137351  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:48:38.137389  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:48:38.152991  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40037
	I0920 18:48:38.153403  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:48:38.153921  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:48:38.153950  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:48:38.154269  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:48:38.154503  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetMachineName
	I0920 18:48:38.154635  762988 main.go:141] libmachine: (ha-525790-m03) Calling .DriverName
	I0920 18:48:38.154794  762988 start.go:159] libmachine.API.Create for "ha-525790" (driver="kvm2")
	I0920 18:48:38.154827  762988 client.go:168] LocalClient.Create starting
	I0920 18:48:38.154887  762988 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem
	I0920 18:48:38.154928  762988 main.go:141] libmachine: Decoding PEM data...
	I0920 18:48:38.154951  762988 main.go:141] libmachine: Parsing certificate...
	I0920 18:48:38.155015  762988 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem
	I0920 18:48:38.155046  762988 main.go:141] libmachine: Decoding PEM data...
	I0920 18:48:38.155064  762988 main.go:141] libmachine: Parsing certificate...
	I0920 18:48:38.155089  762988 main.go:141] libmachine: Running pre-create checks...
	I0920 18:48:38.155100  762988 main.go:141] libmachine: (ha-525790-m03) Calling .PreCreateCheck
	I0920 18:48:38.155260  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetConfigRaw
	I0920 18:48:38.155601  762988 main.go:141] libmachine: Creating machine...
	I0920 18:48:38.155615  762988 main.go:141] libmachine: (ha-525790-m03) Calling .Create
	I0920 18:48:38.155731  762988 main.go:141] libmachine: (ha-525790-m03) Creating KVM machine...
	I0920 18:48:38.156940  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found existing default KVM network
	I0920 18:48:38.157092  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found existing private KVM network mk-ha-525790
	I0920 18:48:38.157240  762988 main.go:141] libmachine: (ha-525790-m03) Setting up store path in /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03 ...
	I0920 18:48:38.157269  762988 main.go:141] libmachine: (ha-525790-m03) Building disk image from file:///home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 18:48:38.157310  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:38.157208  763765 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:48:38.157402  762988 main.go:141] libmachine: (ha-525790-m03) Downloading /home/jenkins/minikube-integration/19678-739831/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 18:48:38.440404  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:38.440283  763765 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/id_rsa...
	I0920 18:48:38.491702  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:38.491581  763765 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/ha-525790-m03.rawdisk...
	I0920 18:48:38.491754  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Writing magic tar header
	I0920 18:48:38.491768  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Writing SSH key tar header
	I0920 18:48:38.491779  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:38.491723  763765 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03 ...
	I0920 18:48:38.491856  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03
	I0920 18:48:38.491883  762988 main.go:141] libmachine: (ha-525790-m03) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03 (perms=drwx------)
	I0920 18:48:38.491895  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube/machines
	I0920 18:48:38.491911  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:48:38.491922  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831
	I0920 18:48:38.491935  762988 main.go:141] libmachine: (ha-525790-m03) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube/machines (perms=drwxr-xr-x)
	I0920 18:48:38.491947  762988 main.go:141] libmachine: (ha-525790-m03) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube (perms=drwxr-xr-x)
	I0920 18:48:38.491958  762988 main.go:141] libmachine: (ha-525790-m03) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831 (perms=drwxrwxr-x)
	I0920 18:48:38.491971  762988 main.go:141] libmachine: (ha-525790-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 18:48:38.491983  762988 main.go:141] libmachine: (ha-525790-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 18:48:38.491992  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 18:48:38.492002  762988 main.go:141] libmachine: (ha-525790-m03) Creating domain...
	I0920 18:48:38.492014  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Checking permissions on dir: /home/jenkins
	I0920 18:48:38.492025  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Checking permissions on dir: /home
	I0920 18:48:38.492039  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Skipping /home - not owner
	I0920 18:48:38.492931  762988 main.go:141] libmachine: (ha-525790-m03) define libvirt domain using xml: 
	I0920 18:48:38.492957  762988 main.go:141] libmachine: (ha-525790-m03) <domain type='kvm'>
	I0920 18:48:38.492966  762988 main.go:141] libmachine: (ha-525790-m03)   <name>ha-525790-m03</name>
	I0920 18:48:38.492979  762988 main.go:141] libmachine: (ha-525790-m03)   <memory unit='MiB'>2200</memory>
	I0920 18:48:38.492990  762988 main.go:141] libmachine: (ha-525790-m03)   <vcpu>2</vcpu>
	I0920 18:48:38.492996  762988 main.go:141] libmachine: (ha-525790-m03)   <features>
	I0920 18:48:38.493008  762988 main.go:141] libmachine: (ha-525790-m03)     <acpi/>
	I0920 18:48:38.493014  762988 main.go:141] libmachine: (ha-525790-m03)     <apic/>
	I0920 18:48:38.493024  762988 main.go:141] libmachine: (ha-525790-m03)     <pae/>
	I0920 18:48:38.493031  762988 main.go:141] libmachine: (ha-525790-m03)     
	I0920 18:48:38.493036  762988 main.go:141] libmachine: (ha-525790-m03)   </features>
	I0920 18:48:38.493042  762988 main.go:141] libmachine: (ha-525790-m03)   <cpu mode='host-passthrough'>
	I0920 18:48:38.493047  762988 main.go:141] libmachine: (ha-525790-m03)   
	I0920 18:48:38.493051  762988 main.go:141] libmachine: (ha-525790-m03)   </cpu>
	I0920 18:48:38.493058  762988 main.go:141] libmachine: (ha-525790-m03)   <os>
	I0920 18:48:38.493071  762988 main.go:141] libmachine: (ha-525790-m03)     <type>hvm</type>
	I0920 18:48:38.493106  762988 main.go:141] libmachine: (ha-525790-m03)     <boot dev='cdrom'/>
	I0920 18:48:38.493129  762988 main.go:141] libmachine: (ha-525790-m03)     <boot dev='hd'/>
	I0920 18:48:38.493143  762988 main.go:141] libmachine: (ha-525790-m03)     <bootmenu enable='no'/>
	I0920 18:48:38.493157  762988 main.go:141] libmachine: (ha-525790-m03)   </os>
	I0920 18:48:38.493169  762988 main.go:141] libmachine: (ha-525790-m03)   <devices>
	I0920 18:48:38.493180  762988 main.go:141] libmachine: (ha-525790-m03)     <disk type='file' device='cdrom'>
	I0920 18:48:38.493199  762988 main.go:141] libmachine: (ha-525790-m03)       <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/boot2docker.iso'/>
	I0920 18:48:38.493210  762988 main.go:141] libmachine: (ha-525790-m03)       <target dev='hdc' bus='scsi'/>
	I0920 18:48:38.493219  762988 main.go:141] libmachine: (ha-525790-m03)       <readonly/>
	I0920 18:48:38.493233  762988 main.go:141] libmachine: (ha-525790-m03)     </disk>
	I0920 18:48:38.493245  762988 main.go:141] libmachine: (ha-525790-m03)     <disk type='file' device='disk'>
	I0920 18:48:38.493262  762988 main.go:141] libmachine: (ha-525790-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 18:48:38.493279  762988 main.go:141] libmachine: (ha-525790-m03)       <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/ha-525790-m03.rawdisk'/>
	I0920 18:48:38.493292  762988 main.go:141] libmachine: (ha-525790-m03)       <target dev='hda' bus='virtio'/>
	I0920 18:48:38.493309  762988 main.go:141] libmachine: (ha-525790-m03)     </disk>
	I0920 18:48:38.493325  762988 main.go:141] libmachine: (ha-525790-m03)     <interface type='network'>
	I0920 18:48:38.493333  762988 main.go:141] libmachine: (ha-525790-m03)       <source network='mk-ha-525790'/>
	I0920 18:48:38.493341  762988 main.go:141] libmachine: (ha-525790-m03)       <model type='virtio'/>
	I0920 18:48:38.493348  762988 main.go:141] libmachine: (ha-525790-m03)     </interface>
	I0920 18:48:38.493354  762988 main.go:141] libmachine: (ha-525790-m03)     <interface type='network'>
	I0920 18:48:38.493361  762988 main.go:141] libmachine: (ha-525790-m03)       <source network='default'/>
	I0920 18:48:38.493368  762988 main.go:141] libmachine: (ha-525790-m03)       <model type='virtio'/>
	I0920 18:48:38.493373  762988 main.go:141] libmachine: (ha-525790-m03)     </interface>
	I0920 18:48:38.493379  762988 main.go:141] libmachine: (ha-525790-m03)     <serial type='pty'>
	I0920 18:48:38.493384  762988 main.go:141] libmachine: (ha-525790-m03)       <target port='0'/>
	I0920 18:48:38.493391  762988 main.go:141] libmachine: (ha-525790-m03)     </serial>
	I0920 18:48:38.493400  762988 main.go:141] libmachine: (ha-525790-m03)     <console type='pty'>
	I0920 18:48:38.493407  762988 main.go:141] libmachine: (ha-525790-m03)       <target type='serial' port='0'/>
	I0920 18:48:38.493412  762988 main.go:141] libmachine: (ha-525790-m03)     </console>
	I0920 18:48:38.493418  762988 main.go:141] libmachine: (ha-525790-m03)     <rng model='virtio'>
	I0920 18:48:38.493427  762988 main.go:141] libmachine: (ha-525790-m03)       <backend model='random'>/dev/random</backend>
	I0920 18:48:38.493440  762988 main.go:141] libmachine: (ha-525790-m03)     </rng>
	I0920 18:48:38.493450  762988 main.go:141] libmachine: (ha-525790-m03)     
	I0920 18:48:38.493460  762988 main.go:141] libmachine: (ha-525790-m03)     
	I0920 18:48:38.493468  762988 main.go:141] libmachine: (ha-525790-m03)   </devices>
	I0920 18:48:38.493474  762988 main.go:141] libmachine: (ha-525790-m03) </domain>
	I0920 18:48:38.493482  762988 main.go:141] libmachine: (ha-525790-m03) 
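The XML printed above is what gets handed to libvirt to define and boot the new m03 guest. A rough sketch of that step with the libvirt Go bindings is below; the package name, helper name, and error handling are assumptions for illustration rather than the driver's exact code.

// Illustrative sketch: define the domain from the XML logged above and boot it
// through the libvirt Go bindings. domainXML holds the <domain>...</domain>
// document; the connection URI matches KVMQemuURI in the config above.
package kvm

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return fmt.Errorf("connect to libvirt: %w", err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML) // persistently define the guest
	if err != nil {
		return fmt.Errorf("define domain: %w", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // actually boot the VM
		return fmt.Errorf("start domain: %w", err)
	}
	return nil
}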
	I0920 18:48:38.499885  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:a8:31:1e in network default
	I0920 18:48:38.500386  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:38.500420  762988 main.go:141] libmachine: (ha-525790-m03) Ensuring networks are active...
	I0920 18:48:38.501164  762988 main.go:141] libmachine: (ha-525790-m03) Ensuring network default is active
	I0920 18:48:38.501467  762988 main.go:141] libmachine: (ha-525790-m03) Ensuring network mk-ha-525790 is active
	I0920 18:48:38.501827  762988 main.go:141] libmachine: (ha-525790-m03) Getting domain xml...
	I0920 18:48:38.502449  762988 main.go:141] libmachine: (ha-525790-m03) Creating domain...
	I0920 18:48:39.736443  762988 main.go:141] libmachine: (ha-525790-m03) Waiting to get IP...
	I0920 18:48:39.737400  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:39.737834  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:39.737861  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:39.737801  763765 retry.go:31] will retry after 302.940885ms: waiting for machine to come up
	I0920 18:48:40.042424  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:40.043046  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:40.043071  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:40.042996  763765 retry.go:31] will retry after 350.440595ms: waiting for machine to come up
	I0920 18:48:40.395674  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:40.396221  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:40.396257  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:40.396163  763765 retry.go:31] will retry after 469.287011ms: waiting for machine to come up
	I0920 18:48:40.866499  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:40.866994  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:40.867018  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:40.866942  763765 retry.go:31] will retry after 590.023713ms: waiting for machine to come up
	I0920 18:48:41.458823  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:41.459324  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:41.459354  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:41.459270  763765 retry.go:31] will retry after 548.369209ms: waiting for machine to come up
	I0920 18:48:42.009043  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:42.009525  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:42.009554  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:42.009477  763765 retry.go:31] will retry after 690.597661ms: waiting for machine to come up
	I0920 18:48:42.701450  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:42.701900  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:42.701929  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:42.701849  763765 retry.go:31] will retry after 975.285461ms: waiting for machine to come up
	I0920 18:48:43.678426  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:43.678873  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:43.678903  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:43.678807  763765 retry.go:31] will retry after 921.744359ms: waiting for machine to come up
	I0920 18:48:44.601892  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:44.602442  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:44.602473  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:44.602393  763765 retry.go:31] will retry after 1.426461906s: waiting for machine to come up
	I0920 18:48:46.031141  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:46.031614  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:46.031647  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:46.031561  763765 retry.go:31] will retry after 1.995117324s: waiting for machine to come up
	I0920 18:48:48.028189  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:48.028849  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:48.028882  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:48.028801  763765 retry.go:31] will retry after 2.180775421s: waiting for machine to come up
	I0920 18:48:50.212117  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:50.212617  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:50.212648  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:50.212544  763765 retry.go:31] will retry after 2.921621074s: waiting for machine to come up
	I0920 18:48:53.136087  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:53.136635  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:53.136663  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:53.136590  763765 retry.go:31] will retry after 2.977541046s: waiting for machine to come up
	I0920 18:48:56.115874  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:56.116235  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:56.116257  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:56.116195  763765 retry.go:31] will retry after 3.995277529s: waiting for machine to come up
	I0920 18:49:00.113196  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.113677  762988 main.go:141] libmachine: (ha-525790-m03) Found IP for machine: 192.168.39.105
	I0920 18:49:00.113703  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has current primary IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.113712  762988 main.go:141] libmachine: (ha-525790-m03) Reserving static IP address...
	I0920 18:49:00.114010  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find host DHCP lease matching {name: "ha-525790-m03", mac: "52:54:00:c8:21:86", ip: "192.168.39.105"} in network mk-ha-525790
	I0920 18:49:00.188644  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Getting to WaitForSSH function...
	I0920 18:49:00.188711  762988 main.go:141] libmachine: (ha-525790-m03) Reserved static IP address: 192.168.39.105
	I0920 18:49:00.188740  762988 main.go:141] libmachine: (ha-525790-m03) Waiting for SSH to be available...
	I0920 18:49:00.191758  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.192256  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:00.192284  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.192476  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Using SSH client type: external
	I0920 18:49:00.192503  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/id_rsa (-rw-------)
	I0920 18:49:00.192535  762988 main.go:141] libmachine: (ha-525790-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:49:00.192565  762988 main.go:141] libmachine: (ha-525790-m03) DBG | About to run SSH command:
	I0920 18:49:00.192608  762988 main.go:141] libmachine: (ha-525790-m03) DBG | exit 0
	I0920 18:49:00.319098  762988 main.go:141] libmachine: (ha-525790-m03) DBG | SSH cmd err, output: <nil>: 
	I0920 18:49:00.319375  762988 main.go:141] libmachine: (ha-525790-m03) KVM machine creation complete!
	I0920 18:49:00.319707  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetConfigRaw
	I0920 18:49:00.320287  762988 main.go:141] libmachine: (ha-525790-m03) Calling .DriverName
	I0920 18:49:00.320484  762988 main.go:141] libmachine: (ha-525790-m03) Calling .DriverName
	I0920 18:49:00.320624  762988 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 18:49:00.320639  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetState
	I0920 18:49:00.321930  762988 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 18:49:00.321949  762988 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 18:49:00.321957  762988 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 18:49:00.321965  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:49:00.324623  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.325172  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:00.325194  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.325388  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:49:00.325587  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:00.325771  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:00.325922  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:49:00.326093  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:49:00.326319  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0920 18:49:00.326331  762988 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 18:49:00.430187  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:49:00.430218  762988 main.go:141] libmachine: Detecting the provisioner...
	I0920 18:49:00.430229  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:49:00.433076  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.433420  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:00.433448  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.433596  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:49:00.433812  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:00.433990  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:00.434135  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:49:00.434275  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:49:00.434454  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0920 18:49:00.434466  762988 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 18:49:00.539754  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 18:49:00.539823  762988 main.go:141] libmachine: found compatible host: buildroot
	I0920 18:49:00.539832  762988 main.go:141] libmachine: Provisioning with buildroot...
	I0920 18:49:00.539852  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetMachineName
	I0920 18:49:00.540100  762988 buildroot.go:166] provisioning hostname "ha-525790-m03"
	I0920 18:49:00.540117  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetMachineName
	I0920 18:49:00.540338  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:49:00.543112  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.543620  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:00.543653  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.543781  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:49:00.543968  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:00.544100  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:00.544196  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:49:00.544321  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:49:00.544478  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0920 18:49:00.544494  762988 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-525790-m03 && echo "ha-525790-m03" | sudo tee /etc/hostname
	I0920 18:49:00.661965  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-525790-m03
	
	I0920 18:49:00.661996  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:49:00.665201  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.665573  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:00.665605  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.665825  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:49:00.666001  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:00.666174  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:00.666276  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:49:00.666436  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:49:00.666619  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0920 18:49:00.666635  762988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-525790-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-525790-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-525790-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:49:00.779769  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:49:00.779801  762988 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19678-739831/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-739831/.minikube}
	I0920 18:49:00.779819  762988 buildroot.go:174] setting up certificates
	I0920 18:49:00.779830  762988 provision.go:84] configureAuth start
	I0920 18:49:00.779838  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetMachineName
	I0920 18:49:00.780148  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetIP
	I0920 18:49:00.783087  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.783547  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:00.783572  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.783793  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:49:00.786303  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.786669  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:00.786697  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.786832  762988 provision.go:143] copyHostCerts
	I0920 18:49:00.786879  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 18:49:00.786917  762988 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem, removing ...
	I0920 18:49:00.786928  762988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 18:49:00.787003  762988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem (1078 bytes)
	I0920 18:49:00.787095  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 18:49:00.787123  762988 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem, removing ...
	I0920 18:49:00.787129  762988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 18:49:00.787169  762988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem (1123 bytes)
	I0920 18:49:00.787241  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 18:49:00.787266  762988 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem, removing ...
	I0920 18:49:00.787273  762988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 18:49:00.787297  762988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem (1679 bytes)
	I0920 18:49:00.787351  762988 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem org=jenkins.ha-525790-m03 san=[127.0.0.1 192.168.39.105 ha-525790-m03 localhost minikube]
	I0920 18:49:01.027593  762988 provision.go:177] copyRemoteCerts
	I0920 18:49:01.027666  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:49:01.027706  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:49:01.030883  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.031239  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:01.031269  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.031374  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:49:01.031584  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:01.031757  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:49:01.031880  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/id_rsa Username:docker}
	I0920 18:49:01.112943  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 18:49:01.113017  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:49:01.137911  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 18:49:01.138012  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:49:01.162029  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 18:49:01.162099  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 18:49:01.186294  762988 provision.go:87] duration metric: took 406.448312ms to configureAuth
	I0920 18:49:01.186330  762988 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:49:01.186601  762988 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:49:01.186679  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:49:01.189283  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.189565  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:01.189599  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.189778  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:49:01.190004  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:01.190151  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:01.190284  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:49:01.190437  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:49:01.190651  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0920 18:49:01.190666  762988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:49:01.415670  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:49:01.415702  762988 main.go:141] libmachine: Checking connection to Docker...
	I0920 18:49:01.415710  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetURL
	I0920 18:49:01.417024  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Using libvirt version 6000000
	I0920 18:49:01.419032  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.419386  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:01.419434  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.419554  762988 main.go:141] libmachine: Docker is up and running!
	I0920 18:49:01.419580  762988 main.go:141] libmachine: Reticulating splines...
	I0920 18:49:01.419588  762988 client.go:171] duration metric: took 23.264752776s to LocalClient.Create
	I0920 18:49:01.419627  762988 start.go:167] duration metric: took 23.26482906s to libmachine.API.Create "ha-525790"
	I0920 18:49:01.419643  762988 start.go:293] postStartSetup for "ha-525790-m03" (driver="kvm2")
	I0920 18:49:01.419656  762988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:49:01.419679  762988 main.go:141] libmachine: (ha-525790-m03) Calling .DriverName
	I0920 18:49:01.419934  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:49:01.419967  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:49:01.422004  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.422361  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:01.422390  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.422501  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:49:01.422709  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:01.422888  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:49:01.423046  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/id_rsa Username:docker}
	I0920 18:49:01.505266  762988 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:49:01.509857  762988 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:49:01.509888  762988 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/addons for local assets ...
	I0920 18:49:01.509961  762988 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/files for local assets ...
	I0920 18:49:01.510060  762988 filesync.go:149] local asset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> 7484972.pem in /etc/ssl/certs
	I0920 18:49:01.510077  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /etc/ssl/certs/7484972.pem
	I0920 18:49:01.510189  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:49:01.520278  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 18:49:01.544737  762988 start.go:296] duration metric: took 125.077677ms for postStartSetup
	I0920 18:49:01.544786  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetConfigRaw
	I0920 18:49:01.545420  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetIP
	I0920 18:49:01.548112  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.548447  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:01.548464  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.548782  762988 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 18:49:01.549036  762988 start.go:128] duration metric: took 23.413540127s to createHost
	I0920 18:49:01.549067  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:49:01.551495  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.551851  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:01.551881  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.552018  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:49:01.552201  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:01.552360  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:01.552475  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:49:01.552663  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:49:01.552890  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0920 18:49:01.552905  762988 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:49:01.655748  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726858141.628739337
	
	I0920 18:49:01.655773  762988 fix.go:216] guest clock: 1726858141.628739337
	I0920 18:49:01.655781  762988 fix.go:229] Guest: 2024-09-20 18:49:01.628739337 +0000 UTC Remote: 2024-09-20 18:49:01.549050778 +0000 UTC m=+142.798112058 (delta=79.688559ms)
	I0920 18:49:01.655798  762988 fix.go:200] guest clock delta is within tolerance: 79.688559ms
	I0920 18:49:01.655803  762988 start.go:83] releasing machines lock for "ha-525790-m03", held for 23.520508822s
	I0920 18:49:01.655836  762988 main.go:141] libmachine: (ha-525790-m03) Calling .DriverName
	I0920 18:49:01.656125  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetIP
	I0920 18:49:01.658823  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.659297  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:01.659334  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.661900  762988 out.go:177] * Found network options:
	I0920 18:49:01.663362  762988 out.go:177]   - NO_PROXY=192.168.39.149,192.168.39.246
	W0920 18:49:01.664757  762988 proxy.go:119] fail to check proxy env: Error ip not in block
	W0920 18:49:01.664778  762988 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 18:49:01.664795  762988 main.go:141] libmachine: (ha-525790-m03) Calling .DriverName
	I0920 18:49:01.665398  762988 main.go:141] libmachine: (ha-525790-m03) Calling .DriverName
	I0920 18:49:01.665614  762988 main.go:141] libmachine: (ha-525790-m03) Calling .DriverName
	I0920 18:49:01.665705  762988 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:49:01.665745  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	W0920 18:49:01.665812  762988 proxy.go:119] fail to check proxy env: Error ip not in block
	W0920 18:49:01.665852  762988 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 18:49:01.665930  762988 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:49:01.665957  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:49:01.668602  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.668630  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.669063  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:01.669134  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:01.669160  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.669251  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.669405  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:49:01.669623  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:49:01.669648  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:01.669763  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:01.669772  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:49:01.669900  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:49:01.669898  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/id_rsa Username:docker}
	I0920 18:49:01.670073  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/id_rsa Username:docker}
	I0920 18:49:01.914294  762988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:49:01.920631  762988 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:49:01.920746  762988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:49:01.939203  762988 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:49:01.939233  762988 start.go:495] detecting cgroup driver to use...
	I0920 18:49:01.939298  762988 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:49:01.956879  762988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:49:01.972680  762988 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:49:01.972737  762988 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:49:01.986983  762988 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:49:02.002057  762988 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:49:02.127309  762988 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:49:02.284949  762988 docker.go:233] disabling docker service ...
	I0920 18:49:02.285026  762988 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:49:02.300753  762988 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:49:02.314717  762988 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:49:02.455235  762988 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:49:02.575677  762988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:49:02.589417  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:49:02.609243  762988 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:49:02.609306  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:49:02.619812  762988 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:49:02.619883  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:49:02.630268  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:49:02.640696  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:49:02.651017  762988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:49:02.661779  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:49:02.672169  762988 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:49:02.689257  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:49:02.699324  762988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:49:02.708522  762988 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:49:02.708581  762988 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:49:02.724380  762988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:49:02.735250  762988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:49:02.845773  762988 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:49:02.940137  762988 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:49:02.940234  762988 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:49:02.945137  762988 start.go:563] Will wait 60s for crictl version
	I0920 18:49:02.945195  762988 ssh_runner.go:195] Run: which crictl
	I0920 18:49:02.949025  762988 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:49:02.985466  762988 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:49:02.985563  762988 ssh_runner.go:195] Run: crio --version
	I0920 18:49:03.014070  762988 ssh_runner.go:195] Run: crio --version
	I0920 18:49:03.043847  762988 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:49:03.045096  762988 out.go:177]   - env NO_PROXY=192.168.39.149
	I0920 18:49:03.046434  762988 out.go:177]   - env NO_PROXY=192.168.39.149,192.168.39.246
	I0920 18:49:03.047542  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetIP
	I0920 18:49:03.050349  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:03.050680  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:03.050706  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:03.050945  762988 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:49:03.055055  762988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:49:03.067151  762988 mustload.go:65] Loading cluster: ha-525790
	I0920 18:49:03.067360  762988 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:49:03.067653  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:49:03.067702  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:49:03.083141  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43467
	I0920 18:49:03.083620  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:49:03.084155  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:49:03.084195  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:49:03.084513  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:49:03.084805  762988 main.go:141] libmachine: (ha-525790) Calling .GetState
	I0920 18:49:03.086455  762988 host.go:66] Checking if "ha-525790" exists ...
	I0920 18:49:03.086791  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:49:03.086828  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:49:03.102141  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39347
	I0920 18:49:03.102510  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:49:03.103060  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:49:03.103086  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:49:03.103433  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:49:03.103638  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:49:03.103800  762988 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790 for IP: 192.168.39.105
	I0920 18:49:03.103812  762988 certs.go:194] generating shared ca certs ...
	I0920 18:49:03.103827  762988 certs.go:226] acquiring lock for ca certs: {Name:mkf559981e1ff96dd3b092845a7637f34a653668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:49:03.103970  762988 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key
	I0920 18:49:03.104025  762988 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key
	I0920 18:49:03.104040  762988 certs.go:256] generating profile certs ...
	I0920 18:49:03.104161  762988 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.key
	I0920 18:49:03.104187  762988 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.482e4680
	I0920 18:49:03.104203  762988 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.482e4680 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.149 192.168.39.246 192.168.39.105 192.168.39.254]
	I0920 18:49:03.247720  762988 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.482e4680 ...
	I0920 18:49:03.247759  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.482e4680: {Name:mk130da53fe193e08a7298b921e0e7264fd28276 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:49:03.247934  762988 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.482e4680 ...
	I0920 18:49:03.247946  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.482e4680: {Name:mk01fbdfb06a85f266d7928f14dec501e347df1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:49:03.248017  762988 certs.go:381] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.482e4680 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt
	I0920 18:49:03.248149  762988 certs.go:385] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.482e4680 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key
	I0920 18:49:03.248278  762988 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key
	I0920 18:49:03.248294  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 18:49:03.248307  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 18:49:03.248321  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 18:49:03.248333  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 18:49:03.248345  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 18:49:03.248357  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 18:49:03.248369  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 18:49:03.270972  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 18:49:03.271068  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem (1338 bytes)
	W0920 18:49:03.271105  762988 certs.go:480] ignoring /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497_empty.pem, impossibly tiny 0 bytes
	I0920 18:49:03.271116  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:49:03.271137  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:49:03.271158  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:49:03.271180  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem (1679 bytes)
	I0920 18:49:03.271215  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 18:49:03.271243  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /usr/share/ca-certificates/7484972.pem
	I0920 18:49:03.271257  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:49:03.271268  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem -> /usr/share/ca-certificates/748497.pem
	I0920 18:49:03.271305  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:49:03.274365  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:49:03.274796  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:49:03.274826  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:49:03.275040  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:49:03.275257  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:49:03.275432  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:49:03.275609  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:49:03.347244  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0920 18:49:03.352573  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0920 18:49:03.366074  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0920 18:49:03.370940  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0920 18:49:03.383525  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0920 18:49:03.387790  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0920 18:49:03.401524  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0920 18:49:03.406898  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0920 18:49:03.418198  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0920 18:49:03.422213  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0920 18:49:03.432483  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0920 18:49:03.436644  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0920 18:49:03.447720  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:49:03.473142  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:49:03.497800  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:49:03.522032  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:49:03.546357  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0920 18:49:03.569451  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:49:03.592748  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:49:03.618320  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:49:03.643316  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /usr/share/ca-certificates/7484972.pem (1708 bytes)
	I0920 18:49:03.669027  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:49:03.693106  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem --> /usr/share/ca-certificates/748497.pem (1338 bytes)
	I0920 18:49:03.717412  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0920 18:49:03.736210  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0920 18:49:03.752820  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0920 18:49:03.769208  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0920 18:49:03.786468  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0920 18:49:03.803392  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0920 18:49:03.819806  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0920 18:49:03.836525  762988 ssh_runner.go:195] Run: openssl version
	I0920 18:49:03.842244  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7484972.pem && ln -fs /usr/share/ca-certificates/7484972.pem /etc/ssl/certs/7484972.pem"
	I0920 18:49:03.852769  762988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7484972.pem
	I0920 18:49:03.857540  762988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 18:33 /usr/share/ca-certificates/7484972.pem
	I0920 18:49:03.857596  762988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7484972.pem
	I0920 18:49:03.863268  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7484972.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:49:03.873806  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:49:03.884262  762988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:49:03.888603  762988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:49:03.888657  762988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:49:03.894115  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:49:03.904764  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/748497.pem && ln -fs /usr/share/ca-certificates/748497.pem /etc/ssl/certs/748497.pem"
	I0920 18:49:03.915491  762988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/748497.pem
	I0920 18:49:03.920009  762988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 18:33 /usr/share/ca-certificates/748497.pem
	I0920 18:49:03.920061  762988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/748497.pem
	I0920 18:49:03.925625  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/748497.pem /etc/ssl/certs/51391683.0"
	I0920 18:49:03.936257  762988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:49:03.940216  762988 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 18:49:03.940272  762988 kubeadm.go:934] updating node {m03 192.168.39.105 8443 v1.31.1 crio true true} ...
	I0920 18:49:03.940372  762988 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-525790-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:49:03.940409  762988 kube-vip.go:115] generating kube-vip config ...
	I0920 18:49:03.940448  762988 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 18:49:03.957917  762988 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 18:49:03.958005  762988 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
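	The manifest above is the kube-vip static pod that later gets written to /etc/kubernetes/manifests/kube-vip.yaml; kube-vip takes all of its settings from container env vars, chiefly the VIP (address), the interface it announces on, and the control-plane port. A rough, hypothetical sketch of rendering just those settings with text/template (field names and values copied from the log; minikube's real generator lives in kube-vip.go):

    // Hypothetical sketch: render the kube-vip settings the log cares about
    // (VIP, interface, API server port) with text/template.
    package main

    import (
        "os"
        "text/template"
    )

    const envTmpl = `env:
    - name: address
      value: {{ .VIP }}
    - name: vip_interface
      value: {{ .Interface }}
    - name: port
      value: "{{ .Port }}"
    - name: cp_enable
      value: "true"
    `

    func main() {
        data := struct {
            VIP, Interface string
            Port           int
        }{VIP: "192.168.39.254", Interface: "eth0", Port: 8443} // values from the log
        t := template.Must(template.New("kube-vip-env").Parse(envTmpl))
        if err := t.Execute(os.Stdout, data); err != nil {
            panic(err)
        }
    }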
	I0920 18:49:03.958067  762988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:49:03.967572  762988 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0920 18:49:03.967624  762988 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0920 18:49:03.976974  762988 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0920 18:49:03.976987  762988 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0920 18:49:03.977005  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 18:49:03.976978  762988 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0920 18:49:03.977048  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 18:49:03.977060  762988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 18:49:03.977022  762988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:49:03.977160  762988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 18:49:03.986571  762988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0920 18:49:03.986605  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0920 18:49:03.986658  762988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0920 18:49:03.986692  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0920 18:49:04.010382  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 18:49:04.010507  762988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 18:49:04.099814  762988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0920 18:49:04.099870  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
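	The stat/scp pairs above follow a simple "check, then transfer" pattern: each of kubectl, kubeadm and kubelet is copied from the local cache only when the remote stat fails. A local-only sketch of the same pattern, with assumed example paths (the real flow runs stat and scp over SSH via ssh_runner.go):

    // Local-only sketch of the "stat, then copy from cache" pattern above.
    package main

    import (
        "fmt"
        "io"
        "os"
    )

    func ensureBinary(cached, dest string) error {
        if _, err := os.Stat(dest); err == nil {
            return nil // destination already present: skip the transfer
        }
        src, err := os.Open(cached)
        if err != nil {
            return err
        }
        defer src.Close()
        dst, err := os.OpenFile(dest, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
        if err != nil {
            return err
        }
        defer dst.Close()
        _, err = io.Copy(dst, src)
        return err
    }

    func main() {
        cached := os.ExpandEnv("$HOME/.minikube/cache/linux/amd64/v1.31.1/kubelet") // example path
        if err := ensureBinary(cached, "/var/lib/minikube/binaries/v1.31.1/kubelet"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }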
	I0920 18:49:04.872454  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0920 18:49:04.882387  762988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 18:49:04.899462  762988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:49:04.916731  762988 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 18:49:04.933245  762988 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 18:49:04.937315  762988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
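	The bash one-liner above drops any stale control-plane.minikube.internal line from /etc/hosts and appends the current VIP, writing through a temp file. The same edit as a small Go sketch (needs root; the path and IP are taken from the log, and the temp-file dance is simplified to a direct overwrite):

    // Sketch of the /etc/hosts rewrite above (run as root).
    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const hostsPath = "/etc/hosts"
        const entry = "192.168.39.254\tcontrol-plane.minikube.internal"
        raw, err := os.ReadFile(hostsPath)
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
            if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                continue // drop the stale entry, like grep -v in the logged command
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)
        if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            panic(err)
        }
    }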
	I0920 18:49:04.950503  762988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:49:05.076487  762988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:49:05.092667  762988 host.go:66] Checking if "ha-525790" exists ...
	I0920 18:49:05.093146  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:49:05.093208  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:49:05.109982  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37499
	I0920 18:49:05.110528  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:49:05.111155  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:49:05.111179  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:49:05.111484  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:49:05.111774  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:49:05.111942  762988 start.go:317] joinCluster: &{Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:49:05.112135  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0920 18:49:05.112159  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:49:05.115062  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:49:05.115484  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:49:05.115515  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:49:05.115682  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:49:05.115883  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:49:05.116066  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:49:05.116238  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:49:05.305796  762988 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:49:05.305864  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 39ds8x.uncxzpvszbuvr57z --discovery-token-ca-cert-hash sha256:947ef21afc8104efa9fe7e5dbe397ab7540e2665a521761d784eb9c9d11b061d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-525790-m03 --control-plane --apiserver-advertise-address=192.168.39.105 --apiserver-bind-port=8443"
	I0920 18:49:27.719468  762988 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 39ds8x.uncxzpvszbuvr57z --discovery-token-ca-cert-hash sha256:947ef21afc8104efa9fe7e5dbe397ab7540e2665a521761d784eb9c9d11b061d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-525790-m03 --control-plane --apiserver-advertise-address=192.168.39.105 --apiserver-bind-port=8443": (22.413569312s)
	I0920 18:49:27.719513  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0920 18:49:28.224417  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-525790-m03 minikube.k8s.io/updated_at=2024_09_20T18_49_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a minikube.k8s.io/name=ha-525790 minikube.k8s.io/primary=false
	I0920 18:49:28.363168  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-525790-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0920 18:49:28.483620  762988 start.go:319] duration metric: took 23.371650439s to joinCluster
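	Joining the third control-plane node boils down to two commands, both visible above: "kubeadm token create --print-join-command --ttl=0" on the existing control plane, then the printed join command on the new node with --control-plane, the advertise address, the CRI socket and node name appended, followed by the kubectl label and taint calls. An illustrative sketch of that flow (it assumes kubeadm on PATH and a reachable cluster; the real run executes everything over SSH):

    // Illustrative only: the two steps behind the join above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Step 1: mint a join command on an existing control-plane node.
        join, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
        if err != nil {
            panic(err)
        }
        fmt.Printf("join command: %s", join)
        // Step 2 (on the new node): run that command plus the flags seen in the log,
        //   --control-plane --apiserver-advertise-address=192.168.39.105 --apiserver-bind-port=8443
        //   --node-name=ha-525790-m03 --cri-socket unix:///var/run/crio/crio.sock
        // then label and taint the node as the two kubectl calls above do.
    }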
	I0920 18:49:28.484099  762988 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:49:28.484156  762988 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:49:28.485758  762988 out.go:177] * Verifying Kubernetes components...
	I0920 18:49:28.487390  762988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:49:28.832062  762988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:49:28.888819  762988 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:49:28.889070  762988 kapi.go:59] client config for ha-525790: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.crt", KeyFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.key", CAFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0920 18:49:28.889131  762988 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.149:8443
	I0920 18:49:28.889340  762988 node_ready.go:35] waiting up to 6m0s for node "ha-525790-m03" to be "Ready" ...
	I0920 18:49:28.889437  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:28.889450  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:28.889462  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:28.889469  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:28.893312  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:29.389975  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:29.390001  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:29.390011  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:29.390015  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:29.393538  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:29.890123  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:29.890149  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:29.890162  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:29.890171  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:29.894353  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:49:30.390136  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:30.390164  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:30.390176  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:30.390181  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:30.393957  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:30.890420  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:30.890442  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:30.890458  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:30.890462  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:30.895075  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:49:30.895862  762988 node_ready.go:53] node "ha-525790-m03" has status "Ready":"False"
	I0920 18:49:31.389871  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:31.389893  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:31.389902  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:31.389907  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:31.393271  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:31.890390  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:31.890411  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:31.890419  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:31.890423  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:31.894048  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:32.389848  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:32.389870  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:32.389879  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:32.389884  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:32.393339  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:32.890299  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:32.890328  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:32.890338  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:32.890343  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:32.893810  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:33.390110  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:33.390140  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:33.390152  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:33.390157  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:33.393525  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:33.393988  762988 node_ready.go:53] node "ha-525790-m03" has status "Ready":"False"
	I0920 18:49:33.890279  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:33.890305  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:33.890317  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:33.890326  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:33.894103  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:34.389629  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:34.389653  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:34.389661  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:34.389666  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:34.393423  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:34.889832  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:34.889861  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:34.889872  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:34.889878  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:34.894113  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:49:35.389632  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:35.389653  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:35.389661  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:35.389668  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:35.392384  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:35.890106  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:35.890141  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:35.890153  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:35.890158  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:35.893183  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:35.893799  762988 node_ready.go:53] node "ha-525790-m03" has status "Ready":"False"
	I0920 18:49:36.390240  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:36.390262  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:36.390275  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:36.390280  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:36.394094  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:36.890179  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:36.890202  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:36.890211  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:36.890216  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:36.893745  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:37.389770  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:37.389795  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:37.389804  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:37.389810  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:37.393011  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:37.889970  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:37.889992  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:37.890000  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:37.890006  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:37.893447  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:37.893999  762988 node_ready.go:53] node "ha-525790-m03" has status "Ready":"False"
	I0920 18:49:38.389862  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:38.389886  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:38.389894  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:38.389898  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:38.393578  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:38.889977  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:38.890002  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:38.890015  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:38.890023  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:38.894709  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:49:39.389961  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:39.389985  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:39.389994  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:39.389997  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:39.393445  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:39.889607  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:39.889639  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:39.889646  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:39.889650  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:39.893375  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:39.894029  762988 node_ready.go:53] node "ha-525790-m03" has status "Ready":"False"
	I0920 18:49:40.389658  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:40.389687  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:40.389699  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:40.389716  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:40.393116  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:40.890100  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:40.890123  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:40.890130  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:40.890135  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:40.893347  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:41.389584  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:41.389611  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:41.389626  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:41.389630  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:41.393223  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:41.890328  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:41.890352  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:41.890361  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:41.890366  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:41.894247  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:41.894758  762988 node_ready.go:53] node "ha-525790-m03" has status "Ready":"False"
	I0920 18:49:42.390094  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:42.390118  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:42.390125  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:42.390129  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:42.393818  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:42.890390  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:42.890413  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:42.890421  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:42.890426  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:42.893913  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:43.390304  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:43.390325  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.390334  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.390338  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.393629  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:43.394194  762988 node_ready.go:49] node "ha-525790-m03" has status "Ready":"True"
	I0920 18:49:43.394215  762988 node_ready.go:38] duration metric: took 14.504859113s for node "ha-525790-m03" to be "Ready" ...
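	The long block above is the node-readiness wait: the same GET against /api/v1/nodes/ha-525790-m03 is re-issued roughly every 500ms and the node's Ready condition is checked until it flips to True (about 14.5s here). A standalone sketch of the same poll, under the assumption that "kubectl proxy" is running on 127.0.0.1:8001 so no client certificates are needed (minikube itself uses the authenticated rest.Config shown in the kapi.go line above):

    // Standalone sketch of the readiness poll above (assumes kubectl proxy).
    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "time"
    )

    type nodeStatus struct {
        Status struct {
            Conditions []struct {
                Type   string `json:"type"`
                Status string `json:"status"`
            } `json:"conditions"`
        } `json:"status"`
    }

    func nodeReady(name string) (bool, error) {
        resp, err := http.Get("http://127.0.0.1:8001/api/v1/nodes/" + name)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        var n nodeStatus
        if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
            return false, err
        }
        for _, c := range n.Status.Conditions {
            if c.Type == "Ready" {
                return c.Status == "True", nil
            }
        }
        return false, nil
    }

    func main() {
        for {
            if ok, err := nodeReady("ha-525790-m03"); err == nil && ok {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
        }
    }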
	I0920 18:49:43.394227  762988 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:49:43.394317  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods
	I0920 18:49:43.394332  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.394342  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.394349  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.399934  762988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 18:49:43.406601  762988 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nfnkj" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.406680  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nfnkj
	I0920 18:49:43.406688  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.406695  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.406698  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.409686  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:43.410357  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:43.410375  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.410382  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.410387  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.413203  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:43.414003  762988 pod_ready.go:93] pod "coredns-7c65d6cfc9-nfnkj" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:43.414026  762988 pod_ready.go:82] duration metric: took 7.399649ms for pod "coredns-7c65d6cfc9-nfnkj" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.414037  762988 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rpcds" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.414110  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rpcds
	I0920 18:49:43.414120  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.414132  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.414139  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.416709  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:43.417387  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:43.417403  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.417411  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.417414  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.419923  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:43.420442  762988 pod_ready.go:93] pod "coredns-7c65d6cfc9-rpcds" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:43.420459  762988 pod_ready.go:82] duration metric: took 6.41605ms for pod "coredns-7c65d6cfc9-rpcds" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.420467  762988 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.420515  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/etcd-ha-525790
	I0920 18:49:43.420523  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.420529  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.420533  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.422830  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:43.423442  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:43.423459  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.423470  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.423476  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.425740  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:43.426292  762988 pod_ready.go:93] pod "etcd-ha-525790" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:43.426309  762988 pod_ready.go:82] duration metric: took 5.837018ms for pod "etcd-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.426318  762988 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.426372  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/etcd-ha-525790-m02
	I0920 18:49:43.426378  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.426385  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.426392  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.428740  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:43.429271  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:49:43.429289  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.429295  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.429301  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.431315  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:43.431859  762988 pod_ready.go:93] pod "etcd-ha-525790-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:43.431880  762988 pod_ready.go:82] duration metric: took 5.554102ms for pod "etcd-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.431888  762988 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-525790-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.591305  762988 request.go:632] Waited for 159.354613ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/etcd-ha-525790-m03
	I0920 18:49:43.591397  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/etcd-ha-525790-m03
	I0920 18:49:43.591408  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.591418  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.591426  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.594816  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:43.790451  762988 request.go:632] Waited for 194.957771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:43.790546  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:43.790557  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.790567  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.790572  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.793782  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:43.794516  762988 pod_ready.go:93] pod "etcd-ha-525790-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:43.794545  762988 pod_ready.go:82] duration metric: took 362.651207ms for pod "etcd-ha-525790-m03" in "kube-system" namespace to be "Ready" ...
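	The "Waited ... due to client-side throttling" lines are the client-go rate limiter at work: with QPS and Burst left at zero in the rest.Config above, the client typically falls back to roughly 5 requests/s with a burst of 10, so once the burst is spent the paired pod+node GETs queue about 200ms apart, as reported. A toy token-bucket sketch (golang.org/x/time/rate, not the minikube code path) that reproduces that spacing:

    // Toy token-bucket sketch showing ~200ms gaps once the burst is used up.
    package main

    import (
        "context"
        "fmt"
        "time"

        "golang.org/x/time/rate"
    )

    func main() {
        limiter := rate.NewLimiter(rate.Limit(5), 10) // assumed client-go-like defaults
        start := time.Now()
        for i := 0; i < 15; i++ {
            if err := limiter.Wait(context.Background()); err != nil {
                panic(err)
            }
            fmt.Printf("request %2d at %v\n", i, time.Since(start).Round(time.Millisecond))
        }
    }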
	I0920 18:49:43.794561  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.990932  762988 request.go:632] Waited for 196.293385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790
	I0920 18:49:43.991032  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790
	I0920 18:49:43.991044  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.991055  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.991070  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.994301  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:44.191298  762988 request.go:632] Waited for 196.219991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:44.191370  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:44.191378  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:44.191385  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:44.191391  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:44.195180  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:44.195974  762988 pod_ready.go:93] pod "kube-apiserver-ha-525790" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:44.195997  762988 pod_ready.go:82] duration metric: took 401.428334ms for pod "kube-apiserver-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:44.196011  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:44.390919  762988 request.go:632] Waited for 194.788684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790-m02
	I0920 18:49:44.390990  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790-m02
	I0920 18:49:44.390995  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:44.391003  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:44.391008  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:44.394492  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:44.591289  762988 request.go:632] Waited for 196.078558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:49:44.591352  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:49:44.591358  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:44.591365  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:44.591370  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:44.595290  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:44.596291  762988 pod_ready.go:93] pod "kube-apiserver-ha-525790-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:44.596314  762988 pod_ready.go:82] duration metric: took 400.296135ms for pod "kube-apiserver-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:44.596325  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-525790-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:44.790722  762988 request.go:632] Waited for 194.31856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790-m03
	I0920 18:49:44.790804  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790-m03
	I0920 18:49:44.790810  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:44.790818  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:44.790822  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:44.794357  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:44.990524  762988 request.go:632] Waited for 195.282104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:44.990631  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:44.990644  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:44.990655  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:44.990665  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:44.994191  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:44.994903  762988 pod_ready.go:93] pod "kube-apiserver-ha-525790-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:44.994929  762988 pod_ready.go:82] duration metric: took 398.597843ms for pod "kube-apiserver-ha-525790-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:44.994944  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:45.191368  762988 request.go:632] Waited for 196.335448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-525790
	I0920 18:49:45.191459  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-525790
	I0920 18:49:45.191467  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:45.191475  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:45.191483  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:45.195161  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:45.391240  762988 request.go:632] Waited for 195.352512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:45.391325  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:45.391333  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:45.391341  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:45.391346  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:45.396237  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:49:45.397053  762988 pod_ready.go:93] pod "kube-controller-manager-ha-525790" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:45.397069  762988 pod_ready.go:82] duration metric: took 402.117627ms for pod "kube-controller-manager-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:45.397080  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:45.590744  762988 request.go:632] Waited for 193.581272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-525790-m02
	I0920 18:49:45.590855  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-525790-m02
	I0920 18:49:45.590865  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:45.590877  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:45.590883  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:45.594359  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:45.791023  762988 request.go:632] Waited for 195.208519ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:49:45.791108  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:49:45.791116  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:45.791126  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:45.791131  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:45.794779  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:45.795437  762988 pod_ready.go:93] pod "kube-controller-manager-ha-525790-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:45.795459  762988 pod_ready.go:82] duration metric: took 398.37091ms for pod "kube-controller-manager-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:45.795469  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-525790-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:45.990550  762988 request.go:632] Waited for 195.001281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-525790-m03
	I0920 18:49:45.990624  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-525790-m03
	I0920 18:49:45.990630  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:45.990638  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:45.990643  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:45.994052  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:46.191122  762988 request.go:632] Waited for 196.353155ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:46.191247  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:46.191259  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:46.191268  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:46.191274  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:46.194216  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:46.194981  762988 pod_ready.go:93] pod "kube-controller-manager-ha-525790-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:46.195002  762988 pod_ready.go:82] duration metric: took 399.526934ms for pod "kube-controller-manager-ha-525790-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:46.195013  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-958jz" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:46.390922  762988 request.go:632] Waited for 195.832956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-958jz
	I0920 18:49:46.391009  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-958jz
	I0920 18:49:46.391020  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:46.391029  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:46.391035  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:46.394008  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:46.591177  762988 request.go:632] Waited for 196.363553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:46.591252  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:46.591257  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:46.591267  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:46.591274  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:46.594463  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:46.595077  762988 pod_ready.go:93] pod "kube-proxy-958jz" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:46.595099  762988 pod_ready.go:82] duration metric: took 400.079203ms for pod "kube-proxy-958jz" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:46.595109  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dx9pg" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:46.791219  762988 request.go:632] Waited for 195.994883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dx9pg
	I0920 18:49:46.791280  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dx9pg
	I0920 18:49:46.791285  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:46.791294  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:46.791299  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:46.794750  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:46.990905  762988 request.go:632] Waited for 195.399247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:46.990977  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:46.990982  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:46.990990  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:46.990998  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:46.994578  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:46.995251  762988 pod_ready.go:93] pod "kube-proxy-dx9pg" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:46.995275  762988 pod_ready.go:82] duration metric: took 400.160371ms for pod "kube-proxy-dx9pg" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:46.995288  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sspfs" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:47.191109  762988 request.go:632] Waited for 195.732991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sspfs
	I0920 18:49:47.191198  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sspfs
	I0920 18:49:47.191209  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:47.191220  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:47.191229  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:47.194285  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:47.390397  762988 request.go:632] Waited for 195.278961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:49:47.390485  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:49:47.390494  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:47.390502  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:47.390509  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:47.394123  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:47.394634  762988 pod_ready.go:93] pod "kube-proxy-sspfs" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:47.394658  762988 pod_ready.go:82] duration metric: took 399.362351ms for pod "kube-proxy-sspfs" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:47.394668  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:47.590688  762988 request.go:632] Waited for 195.932452ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-525790
	I0920 18:49:47.590750  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-525790
	I0920 18:49:47.590756  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:47.590766  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:47.590773  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:47.594088  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:47.791044  762988 request.go:632] Waited for 196.393517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:47.791127  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:47.791137  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:47.791151  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:47.791160  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:47.794795  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:47.795601  762988 pod_ready.go:93] pod "kube-scheduler-ha-525790" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:47.795620  762988 pod_ready.go:82] duration metric: took 400.94539ms for pod "kube-scheduler-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:47.795629  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:47.990769  762988 request.go:632] Waited for 195.033171ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-525790-m02
	I0920 18:49:47.990860  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-525790-m02
	I0920 18:49:47.990871  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:47.990883  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:47.990894  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:47.994202  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:48.191063  762988 request.go:632] Waited for 196.257455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:49:48.191127  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:49:48.191134  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:48.191144  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:48.191149  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:48.194376  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:48.194886  762988 pod_ready.go:93] pod "kube-scheduler-ha-525790-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:48.194906  762988 pod_ready.go:82] duration metric: took 399.270985ms for pod "kube-scheduler-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:48.194915  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-525790-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:48.390935  762988 request.go:632] Waited for 195.938247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-525790-m03
	I0920 18:49:48.391011  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-525790-m03
	I0920 18:49:48.391029  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:48.391064  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:48.391074  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:48.394097  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:48.591276  762988 request.go:632] Waited for 196.398543ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:48.591340  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:48.591351  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:48.591359  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:48.591363  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:48.594456  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:48.595126  762988 pod_ready.go:93] pod "kube-scheduler-ha-525790-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:48.595147  762988 pod_ready.go:82] duration metric: took 400.225521ms for pod "kube-scheduler-ha-525790-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:48.595159  762988 pod_ready.go:39] duration metric: took 5.200916863s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:49:48.595173  762988 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:49:48.595224  762988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:49:48.611081  762988 api_server.go:72] duration metric: took 20.126887425s to wait for apiserver process to appear ...
	I0920 18:49:48.611105  762988 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:49:48.611130  762988 api_server.go:253] Checking apiserver healthz at https://192.168.39.149:8443/healthz ...
	I0920 18:49:48.616371  762988 api_server.go:279] https://192.168.39.149:8443/healthz returned 200:
	ok
	I0920 18:49:48.616442  762988 round_trippers.go:463] GET https://192.168.39.149:8443/version
	I0920 18:49:48.616450  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:48.616461  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:48.616470  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:48.617373  762988 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0920 18:49:48.617437  762988 api_server.go:141] control plane version: v1.31.1
	I0920 18:49:48.617451  762988 api_server.go:131] duration metric: took 6.339029ms to wait for apiserver health ...
	I0920 18:49:48.617458  762988 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:49:48.790943  762988 request.go:632] Waited for 173.409092ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods
	I0920 18:49:48.791019  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods
	I0920 18:49:48.791024  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:48.791031  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:48.791035  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:48.799193  762988 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0920 18:49:48.807423  762988 system_pods.go:59] 24 kube-system pods found
	I0920 18:49:48.807457  762988 system_pods.go:61] "coredns-7c65d6cfc9-nfnkj" [7994989d-6bfa-4d25-b7b7-662d2e6c742c] Running
	I0920 18:49:48.807464  762988 system_pods.go:61] "coredns-7c65d6cfc9-rpcds" [7db58219-7147-4a45-b233-ef3c698566ef] Running
	I0920 18:49:48.807470  762988 system_pods.go:61] "etcd-ha-525790" [f23cd40e-ac8d-451b-9bf9-2ef5d62ef4b6] Running
	I0920 18:49:48.807476  762988 system_pods.go:61] "etcd-ha-525790-m02" [5a29103e-6da3-40d1-be3c-58fdc0f28b54] Running
	I0920 18:49:48.807480  762988 system_pods.go:61] "etcd-ha-525790-m03" [33df920f-e346-4613-af3b-67042a9db421] Running
	I0920 18:49:48.807485  762988 system_pods.go:61] "kindnet-8glgp" [f462782e-1ff6-410a-8359-de3360d380b0] Running
	I0920 18:49:48.807489  762988 system_pods.go:61] "kindnet-9qbm6" [87e8ae18-a561-48ec-9835-27446b6917d3] Running
	I0920 18:49:48.807493  762988 system_pods.go:61] "kindnet-j5mmq" [9ecd60f9-bfbf-4292-8449-869dd3afa02c] Running
	I0920 18:49:48.807498  762988 system_pods.go:61] "kube-apiserver-ha-525790" [0e3563fd-5185-4dc6-8d9b-a7d954b96c8d] Running
	I0920 18:49:48.807503  762988 system_pods.go:61] "kube-apiserver-ha-525790-m02" [b3966e2e-ce3d-4916-b73c-0d80cd1793f0] Running
	I0920 18:49:48.807508  762988 system_pods.go:61] "kube-apiserver-ha-525790-m03" [7649543a-3c54-4627-8a0a-bc1945712ad7] Running
	I0920 18:49:48.807514  762988 system_pods.go:61] "kube-controller-manager-ha-525790" [1d695853-6a7e-487d-a52b-9aceb1fc9ff3] Running
	I0920 18:49:48.807519  762988 system_pods.go:61] "kube-controller-manager-ha-525790-m02" [090c1833-3800-4e13-b9a7-c03680f3d55d] Running
	I0920 18:49:48.807524  762988 system_pods.go:61] "kube-controller-manager-ha-525790-m03" [5e675da3-2dd4-417a-a6f8-d4fe90da0ac0] Running
	I0920 18:49:48.807529  762988 system_pods.go:61] "kube-proxy-958jz" [46603403-eb82-4f15-a1da-da62194a072f] Running
	I0920 18:49:48.807535  762988 system_pods.go:61] "kube-proxy-dx9pg" [aa873f4e-a8f0-49ab-95e9-d81d15b650f5] Running
	I0920 18:49:48.807543  762988 system_pods.go:61] "kube-proxy-sspfs" [15203515-fc45-4624-b97e-8ec247f01e2d] Running
	I0920 18:49:48.807550  762988 system_pods.go:61] "kube-scheduler-ha-525790" [8cb7e23e-c1d1-4753-9758-b17ef9fd08d7] Running
	I0920 18:49:48.807556  762988 system_pods.go:61] "kube-scheduler-ha-525790-m02" [dc9a5561-5d41-445d-a0ba-de3b2405f821] Running
	I0920 18:49:48.807562  762988 system_pods.go:61] "kube-scheduler-ha-525790-m03" [729fa556-4301-49a9-8ed0-506ecb3a8b76] Running
	I0920 18:49:48.807567  762988 system_pods.go:61] "kube-vip-ha-525790" [0b318b1e-7a85-4c8c-8a5a-2fee226d7702] Running
	I0920 18:49:48.807576  762988 system_pods.go:61] "kube-vip-ha-525790-m02" [f2316231-5c1d-4bf2-ae62-5a4202b5818b] Running
	I0920 18:49:48.807581  762988 system_pods.go:61] "kube-vip-ha-525790-m03" [3050094c-de2a-449f-866c-0e8ddceb697d] Running
	I0920 18:49:48.807587  762988 system_pods.go:61] "storage-provisioner" [ea6bf34f-c1f7-4216-a61f-be30846c991b] Running
	I0920 18:49:48.807599  762988 system_pods.go:74] duration metric: took 190.132126ms to wait for pod list to return data ...
	I0920 18:49:48.807613  762988 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:49:48.991230  762988 request.go:632] Waited for 183.520385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/default/serviceaccounts
	I0920 18:49:48.991298  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/default/serviceaccounts
	I0920 18:49:48.991305  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:48.991315  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:48.991320  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:48.994457  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:48.994600  762988 default_sa.go:45] found service account: "default"
	I0920 18:49:48.994616  762988 default_sa.go:55] duration metric: took 186.997115ms for default service account to be created ...
	I0920 18:49:48.994626  762988 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:49:49.191090  762988 request.go:632] Waited for 196.382893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods
	I0920 18:49:49.191150  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods
	I0920 18:49:49.191156  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:49.191167  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:49.191172  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:49.196609  762988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 18:49:49.203953  762988 system_pods.go:86] 24 kube-system pods found
	I0920 18:49:49.203984  762988 system_pods.go:89] "coredns-7c65d6cfc9-nfnkj" [7994989d-6bfa-4d25-b7b7-662d2e6c742c] Running
	I0920 18:49:49.203991  762988 system_pods.go:89] "coredns-7c65d6cfc9-rpcds" [7db58219-7147-4a45-b233-ef3c698566ef] Running
	I0920 18:49:49.203997  762988 system_pods.go:89] "etcd-ha-525790" [f23cd40e-ac8d-451b-9bf9-2ef5d62ef4b6] Running
	I0920 18:49:49.204001  762988 system_pods.go:89] "etcd-ha-525790-m02" [5a29103e-6da3-40d1-be3c-58fdc0f28b54] Running
	I0920 18:49:49.204005  762988 system_pods.go:89] "etcd-ha-525790-m03" [33df920f-e346-4613-af3b-67042a9db421] Running
	I0920 18:49:49.204010  762988 system_pods.go:89] "kindnet-8glgp" [f462782e-1ff6-410a-8359-de3360d380b0] Running
	I0920 18:49:49.204015  762988 system_pods.go:89] "kindnet-9qbm6" [87e8ae18-a561-48ec-9835-27446b6917d3] Running
	I0920 18:49:49.204020  762988 system_pods.go:89] "kindnet-j5mmq" [9ecd60f9-bfbf-4292-8449-869dd3afa02c] Running
	I0920 18:49:49.204026  762988 system_pods.go:89] "kube-apiserver-ha-525790" [0e3563fd-5185-4dc6-8d9b-a7d954b96c8d] Running
	I0920 18:49:49.204033  762988 system_pods.go:89] "kube-apiserver-ha-525790-m02" [b3966e2e-ce3d-4916-b73c-0d80cd1793f0] Running
	I0920 18:49:49.204042  762988 system_pods.go:89] "kube-apiserver-ha-525790-m03" [7649543a-3c54-4627-8a0a-bc1945712ad7] Running
	I0920 18:49:49.204048  762988 system_pods.go:89] "kube-controller-manager-ha-525790" [1d695853-6a7e-487d-a52b-9aceb1fc9ff3] Running
	I0920 18:49:49.204061  762988 system_pods.go:89] "kube-controller-manager-ha-525790-m02" [090c1833-3800-4e13-b9a7-c03680f3d55d] Running
	I0920 18:49:49.204067  762988 system_pods.go:89] "kube-controller-manager-ha-525790-m03" [5e675da3-2dd4-417a-a6f8-d4fe90da0ac0] Running
	I0920 18:49:49.204073  762988 system_pods.go:89] "kube-proxy-958jz" [46603403-eb82-4f15-a1da-da62194a072f] Running
	I0920 18:49:49.204081  762988 system_pods.go:89] "kube-proxy-dx9pg" [aa873f4e-a8f0-49ab-95e9-d81d15b650f5] Running
	I0920 18:49:49.204086  762988 system_pods.go:89] "kube-proxy-sspfs" [15203515-fc45-4624-b97e-8ec247f01e2d] Running
	I0920 18:49:49.204093  762988 system_pods.go:89] "kube-scheduler-ha-525790" [8cb7e23e-c1d1-4753-9758-b17ef9fd08d7] Running
	I0920 18:49:49.204097  762988 system_pods.go:89] "kube-scheduler-ha-525790-m02" [dc9a5561-5d41-445d-a0ba-de3b2405f821] Running
	I0920 18:49:49.204103  762988 system_pods.go:89] "kube-scheduler-ha-525790-m03" [729fa556-4301-49a9-8ed0-506ecb3a8b76] Running
	I0920 18:49:49.204107  762988 system_pods.go:89] "kube-vip-ha-525790" [0b318b1e-7a85-4c8c-8a5a-2fee226d7702] Running
	I0920 18:49:49.204115  762988 system_pods.go:89] "kube-vip-ha-525790-m02" [f2316231-5c1d-4bf2-ae62-5a4202b5818b] Running
	I0920 18:49:49.204121  762988 system_pods.go:89] "kube-vip-ha-525790-m03" [3050094c-de2a-449f-866c-0e8ddceb697d] Running
	I0920 18:49:49.204127  762988 system_pods.go:89] "storage-provisioner" [ea6bf34f-c1f7-4216-a61f-be30846c991b] Running
	I0920 18:49:49.204137  762988 system_pods.go:126] duration metric: took 209.50314ms to wait for k8s-apps to be running ...
	I0920 18:49:49.204149  762988 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:49:49.204205  762988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:49:49.220678  762988 system_svc.go:56] duration metric: took 16.519226ms WaitForService to wait for kubelet
	I0920 18:49:49.220713  762988 kubeadm.go:582] duration metric: took 20.736522024s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:49:49.220737  762988 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:49:49.391073  762988 request.go:632] Waited for 170.223638ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes
	I0920 18:49:49.391144  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes
	I0920 18:49:49.391152  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:49.391163  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:49.391185  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:49.395131  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:49.396058  762988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:49:49.396082  762988 node_conditions.go:123] node cpu capacity is 2
	I0920 18:49:49.396097  762988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:49:49.396102  762988 node_conditions.go:123] node cpu capacity is 2
	I0920 18:49:49.396107  762988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:49:49.396112  762988 node_conditions.go:123] node cpu capacity is 2
	I0920 18:49:49.396118  762988 node_conditions.go:105] duration metric: took 175.374616ms to run NodePressure ...
	I0920 18:49:49.396133  762988 start.go:241] waiting for startup goroutines ...
	I0920 18:49:49.396165  762988 start.go:255] writing updated cluster config ...
	I0920 18:49:49.396463  762988 ssh_runner.go:195] Run: rm -f paused
	I0920 18:49:49.451056  762988 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:49:49.453054  762988 out.go:177] * Done! kubectl is now configured to use "ha-525790" cluster and "default" namespace by default
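	(Editor's illustration, not part of the captured log: the healthz step recorded above issues GET https://192.168.39.149:8443/healthz and treats a 200 response with body "ok" as healthy. The Go sketch below shows that kind of probe in minimal form; it is not minikube's own implementation, and the file name, function name, hard-coded URL, timeout, and InsecureSkipVerify setting are illustrative assumptions only.)

	// healthz_probe.go - minimal sketch of an apiserver healthz probe (assumed names/values).
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// apiserverHealthy reports whether the endpoint returns HTTP 200 with body "ok".
	func apiserverHealthy(url string) (bool, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// A real client would trust the cluster CA; skipping verification keeps the sketch short.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			return false, err
		}
		return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
	}

	func main() {
		ok, err := apiserverHealthy("https://192.168.39.149:8443/healthz")
		fmt.Println(ok, err)
	}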
	
	
	==> CRI-O <==
	Sep 20 18:53:25 ha-525790 crio[657]: time="2024-09-20 18:53:25.548575949Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a72f331c-4baa-4ab9-8765-be537506feb4 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:53:25 ha-525790 crio[657]: time="2024-09-20 18:53:25.549616900Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f555d1b1-a02b-463b-a8f7-6a2129761342 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:53:25 ha-525790 crio[657]: time="2024-09-20 18:53:25.550031630Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858405550010432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f555d1b1-a02b-463b-a8f7-6a2129761342 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:53:25 ha-525790 crio[657]: time="2024-09-20 18:53:25.550712340Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=15cb53f9-0c7e-47cf-8b98-f153c762233c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:25 ha-525790 crio[657]: time="2024-09-20 18:53:25.550814165Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=15cb53f9-0c7e-47cf-8b98-f153c762233c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:25 ha-525790 crio[657]: time="2024-09-20 18:53:25.551052533Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:344b03b51dddb76a43e34eff505cae1c7f2fc0c407fd4b1907c7b90ca3f1740d,PodSandboxId:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726858192106122080,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57fdde7a007ff9a10cfbb40f67eb3fd2036aeb4918ebe808fdb7ab94429b6f90,PodSandboxId:f2f3faeb3feb37731a72146ab0e2730c2f00b0a64c288e6aa139840b8d1852b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726858057039915142,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e,PodSandboxId:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858056980739363,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1,PodSandboxId:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858056983536331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6b
fa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98,PodSandboxId:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268580
44669106744,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8,PodSandboxId:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858044313140306,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c704a3be19bcb0cfb653cb3bdad4548ff16ab59fc886290b6b1ed57874b379cc,PodSandboxId:afc309e0288a67308501f446405f65d8615c4060f819039947aff5f12a4b1be9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726858035446566658,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede9a5fdac3bc6f58bd35cff44d56d88,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1196adfd1199669f289106197057a6a027e2a97c97830961d5c102c7143d67bb,PodSandboxId:4ed8fcb6c51972392f851f91d41ef974ee35c8b05f66d02ba0fbacb37d072738,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858033110408572,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706,PodSandboxId:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858033123239626,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93,PodSandboxId:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858033076459540,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49582cb9e07244a19c1edf1accdab94a1702d3cccc9e120b67b8c49f7629db72,PodSandboxId:ee2f4d881a4246f1bf78be961d0510d0f0774b7bcb9c2febc0c3568a63704973,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858033054127944,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=15cb53f9-0c7e-47cf-8b98-f153c762233c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:25 ha-525790 crio[657]: time="2024-09-20 18:53:25.595150591Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=09b4ebff-09cf-425a-a0a1-a94d66a4a2a5 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:53:25 ha-525790 crio[657]: time="2024-09-20 18:53:25.595240475Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=09b4ebff-09cf-425a-a0a1-a94d66a4a2a5 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:53:25 ha-525790 crio[657]: time="2024-09-20 18:53:25.596657096Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9389fc0c-ba1f-49fd-aace-2e6aa6b98ff6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:53:25 ha-525790 crio[657]: time="2024-09-20 18:53:25.597330695Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858405597248127,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9389fc0c-ba1f-49fd-aace-2e6aa6b98ff6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:53:25 ha-525790 crio[657]: time="2024-09-20 18:53:25.597927223Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cefcfaf5-5a6f-4dcd-a88b-e9ff6ac7cb0d name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:25 ha-525790 crio[657]: time="2024-09-20 18:53:25.598090607Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cefcfaf5-5a6f-4dcd-a88b-e9ff6ac7cb0d name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:25 ha-525790 crio[657]: time="2024-09-20 18:53:25.598438935Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:344b03b51dddb76a43e34eff505cae1c7f2fc0c407fd4b1907c7b90ca3f1740d,PodSandboxId:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726858192106122080,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57fdde7a007ff9a10cfbb40f67eb3fd2036aeb4918ebe808fdb7ab94429b6f90,PodSandboxId:f2f3faeb3feb37731a72146ab0e2730c2f00b0a64c288e6aa139840b8d1852b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726858057039915142,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e,PodSandboxId:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858056980739363,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1,PodSandboxId:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858056983536331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6b
fa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98,PodSandboxId:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268580
44669106744,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8,PodSandboxId:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858044313140306,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c704a3be19bcb0cfb653cb3bdad4548ff16ab59fc886290b6b1ed57874b379cc,PodSandboxId:afc309e0288a67308501f446405f65d8615c4060f819039947aff5f12a4b1be9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726858035446566658,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede9a5fdac3bc6f58bd35cff44d56d88,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1196adfd1199669f289106197057a6a027e2a97c97830961d5c102c7143d67bb,PodSandboxId:4ed8fcb6c51972392f851f91d41ef974ee35c8b05f66d02ba0fbacb37d072738,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858033110408572,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706,PodSandboxId:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858033123239626,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93,PodSandboxId:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858033076459540,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49582cb9e07244a19c1edf1accdab94a1702d3cccc9e120b67b8c49f7629db72,PodSandboxId:ee2f4d881a4246f1bf78be961d0510d0f0774b7bcb9c2febc0c3568a63704973,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858033054127944,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cefcfaf5-5a6f-4dcd-a88b-e9ff6ac7cb0d name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:25 ha-525790 crio[657]: time="2024-09-20 18:53:25.617593258Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f42ea7f3-23ba-4c7c-a368-5bda17cf470b name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 20 18:53:25 ha-525790 crio[657]: time="2024-09-20 18:53:25.617961357Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-z26jr,Uid:3a3cda3d-ccab-4483-98e6-50d779cc3354,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726858190692240668,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:49:50.378606577Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f2f3faeb3feb37731a72146ab0e2730c2f00b0a64c288e6aa139840b8d1852b0,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:ea6bf34f-c1f7-4216-a61f-be30846c991b,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1726858056755228001,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-20T18:47:36.445299882Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-nfnkj,Uid:7994989d-6bfa-4d25-b7b7-662d2e6c742c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726858056748003547,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:47:36.440226200Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-rpcds,Uid:7db58219-7147-4a45-b233-ef3c698566ef,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1726858056743924756,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:47:36.433422835Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&PodSandboxMetadata{Name:kindnet-9qbm6,Uid:87e8ae18-a561-48ec-9835-27446b6917d3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726858044173674425,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:47:23.865527140Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&PodSandboxMetadata{Name:kube-proxy-958jz,Uid:46603403-eb82-4f15-a1da-da62194a072f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726858044156236050,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:47:23.840921604Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:afc309e0288a67308501f446405f65d8615c4060f819039947aff5f12a4b1be9,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-525790,Uid:ede9a5fdac3bc6f58bd35cff44d56d88,Namespace:kube-system,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1726858032860424246,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede9a5fdac3bc6f58bd35cff44d56d88,},Annotations:map[string]string{kubernetes.io/config.hash: ede9a5fdac3bc6f58bd35cff44d56d88,kubernetes.io/config.seen: 2024-09-20T18:47:12.380325613Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-525790,Uid:b5b17991bc76439c3c561e1834ba5b98,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726858032856363299,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b5b1
7991bc76439c3c561e1834ba5b98,kubernetes.io/config.seen: 2024-09-20T18:47:12.380324762Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4ed8fcb6c51972392f851f91d41ef974ee35c8b05f66d02ba0fbacb37d072738,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-525790,Uid:09c07a212745d10d359109606d1f8e5a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726858032851693889,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.149:8443,kubernetes.io/config.hash: 09c07a212745d10d359109606d1f8e5a,kubernetes.io/config.seen: 2024-09-20T18:47:12.380322381Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ee2f4d881a4246f1bf78be961d0510d0f0774b7bcb9c2febc0c3568a63704973,Met
adata:&PodSandboxMetadata{Name:kube-controller-manager-ha-525790,Uid:fa36b1aee3057cc6a6644c2a2b2b9582,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726858032849341007,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: fa36b1aee3057cc6a6644c2a2b2b9582,kubernetes.io/config.seen: 2024-09-20T18:47:12.380323596Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&PodSandboxMetadata{Name:etcd-ha-525790,Uid:a2b3e6b5917d1f11b27828fbc85076e4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726858032825529828,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-525790,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.149:2379,kubernetes.io/config.hash: a2b3e6b5917d1f11b27828fbc85076e4,kubernetes.io/config.seen: 2024-09-20T18:47:12.380318617Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f42ea7f3-23ba-4c7c-a368-5bda17cf470b name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 20 18:53:25 ha-525790 crio[657]: time="2024-09-20 18:53:25.618789377Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=12648cd0-f40a-4dd5-8d68-d67c3ca27609 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:25 ha-525790 crio[657]: time="2024-09-20 18:53:25.618860590Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=12648cd0-f40a-4dd5-8d68-d67c3ca27609 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:25 ha-525790 crio[657]: time="2024-09-20 18:53:25.619164508Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:344b03b51dddb76a43e34eff505cae1c7f2fc0c407fd4b1907c7b90ca3f1740d,PodSandboxId:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726858192106122080,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57fdde7a007ff9a10cfbb40f67eb3fd2036aeb4918ebe808fdb7ab94429b6f90,PodSandboxId:f2f3faeb3feb37731a72146ab0e2730c2f00b0a64c288e6aa139840b8d1852b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726858057039915142,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e,PodSandboxId:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858056980739363,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1,PodSandboxId:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858056983536331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6b
fa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98,PodSandboxId:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268580
44669106744,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8,PodSandboxId:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858044313140306,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c704a3be19bcb0cfb653cb3bdad4548ff16ab59fc886290b6b1ed57874b379cc,PodSandboxId:afc309e0288a67308501f446405f65d8615c4060f819039947aff5f12a4b1be9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726858035446566658,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede9a5fdac3bc6f58bd35cff44d56d88,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1196adfd1199669f289106197057a6a027e2a97c97830961d5c102c7143d67bb,PodSandboxId:4ed8fcb6c51972392f851f91d41ef974ee35c8b05f66d02ba0fbacb37d072738,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858033110408572,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706,PodSandboxId:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858033123239626,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93,PodSandboxId:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858033076459540,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49582cb9e07244a19c1edf1accdab94a1702d3cccc9e120b67b8c49f7629db72,PodSandboxId:ee2f4d881a4246f1bf78be961d0510d0f0774b7bcb9c2febc0c3568a63704973,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858033054127944,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=12648cd0-f40a-4dd5-8d68-d67c3ca27609 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:25 ha-525790 crio[657]: time="2024-09-20 18:53:25.643911142Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e9261c67-2fa1-40d2-86e3-ded7e585b18f name=/runtime.v1.RuntimeService/Version
	Sep 20 18:53:25 ha-525790 crio[657]: time="2024-09-20 18:53:25.644009622Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e9261c67-2fa1-40d2-86e3-ded7e585b18f name=/runtime.v1.RuntimeService/Version
	Sep 20 18:53:25 ha-525790 crio[657]: time="2024-09-20 18:53:25.645084535Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=12c7da69-c67e-4784-868e-305ecfdfef23 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:53:25 ha-525790 crio[657]: time="2024-09-20 18:53:25.645580217Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858405645555990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=12c7da69-c67e-4784-868e-305ecfdfef23 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:53:25 ha-525790 crio[657]: time="2024-09-20 18:53:25.646397809Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bd9f23d7-f967-40f4-9267-2170d631f57f name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:25 ha-525790 crio[657]: time="2024-09-20 18:53:25.646449532Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bd9f23d7-f967-40f4-9267-2170d631f57f name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:25 ha-525790 crio[657]: time="2024-09-20 18:53:25.646722820Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:344b03b51dddb76a43e34eff505cae1c7f2fc0c407fd4b1907c7b90ca3f1740d,PodSandboxId:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726858192106122080,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57fdde7a007ff9a10cfbb40f67eb3fd2036aeb4918ebe808fdb7ab94429b6f90,PodSandboxId:f2f3faeb3feb37731a72146ab0e2730c2f00b0a64c288e6aa139840b8d1852b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726858057039915142,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e,PodSandboxId:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858056980739363,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1,PodSandboxId:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858056983536331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6b
fa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98,PodSandboxId:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268580
44669106744,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8,PodSandboxId:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858044313140306,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c704a3be19bcb0cfb653cb3bdad4548ff16ab59fc886290b6b1ed57874b379cc,PodSandboxId:afc309e0288a67308501f446405f65d8615c4060f819039947aff5f12a4b1be9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726858035446566658,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede9a5fdac3bc6f58bd35cff44d56d88,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1196adfd1199669f289106197057a6a027e2a97c97830961d5c102c7143d67bb,PodSandboxId:4ed8fcb6c51972392f851f91d41ef974ee35c8b05f66d02ba0fbacb37d072738,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858033110408572,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706,PodSandboxId:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858033123239626,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93,PodSandboxId:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858033076459540,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49582cb9e07244a19c1edf1accdab94a1702d3cccc9e120b67b8c49f7629db72,PodSandboxId:ee2f4d881a4246f1bf78be961d0510d0f0774b7bcb9c2febc0c3568a63704973,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858033054127944,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bd9f23d7-f967-40f4-9267-2170d631f57f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	344b03b51dddb       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   125671e39b996       busybox-7dff88458-z26jr
	57fdde7a007ff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   f2f3faeb3feb3       storage-provisioner
	172e8f75d2a84       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   5dbd6acffd5c5       coredns-7c65d6cfc9-nfnkj
	3dff404b6ad2a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   34517f9f64c86       coredns-7c65d6cfc9-rpcds
	5579930bef0fc       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   64136f65f6d34       kindnet-9qbm6
	3d469134674c2       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   2e440a5ac73b7       kube-proxy-958jz
	c704a3be19bcb       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   afc309e0288a6       kube-vip-ha-525790
	7d0496391eb85       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   fae09dfcf3d6f       kube-scheduler-ha-525790
	1196adfd11996       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   4ed8fcb6c5197       kube-apiserver-ha-525790
	bcca29b119984       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   17818940c2036       etcd-ha-525790
	49582cb9e0724       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   ee2f4d881a424       kube-controller-manager-ha-525790
	
	
	==> coredns [172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1] <==
	[INFO] 10.244.0.4:52678 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000127756s
	[INFO] 10.244.1.2:49868 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000196016s
	[INFO] 10.244.1.2:54874 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00387198s
	[INFO] 10.244.1.2:39870 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000203758s
	[INFO] 10.244.1.2:47679 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000185456s
	[INFO] 10.244.1.2:49534 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000164113s
	[INFO] 10.244.2.2:50032 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167479s
	[INFO] 10.244.2.2:33413 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001865571s
	[INFO] 10.244.0.4:38374 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010475s
	[INFO] 10.244.0.4:44676 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170058s
	[INFO] 10.244.0.4:54182 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000123082s
	[INFO] 10.244.0.4:52067 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108075s
	[INFO] 10.244.1.2:36885 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000133944s
	[INFO] 10.244.2.2:48327 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127372s
	[INFO] 10.244.2.2:52262 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160755s
	[INFO] 10.244.0.4:44171 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111758s
	[INFO] 10.244.1.2:36220 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000196033s
	[INFO] 10.244.1.2:33859 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000222322s
	[INFO] 10.244.1.2:55349 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000158431s
	[INFO] 10.244.2.2:37976 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138385s
	[INFO] 10.244.2.2:56378 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000191303s
	[INFO] 10.244.2.2:54246 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117607s
	[INFO] 10.244.0.4:53115 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116565s
	[INFO] 10.244.0.4:49608 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000095821s
	[INFO] 10.244.0.4:60862 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111997s
	
	
	==> coredns [3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e] <==
	[INFO] 10.244.0.4:45127 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001808433s
	[INFO] 10.244.1.2:43604 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003790448s
	[INFO] 10.244.1.2:40634 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000273503s
	[INFO] 10.244.1.2:53633 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000177331s
	[INFO] 10.244.2.2:45376 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000253726s
	[INFO] 10.244.2.2:42750 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000311517s
	[INFO] 10.244.2.2:42748 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001319529s
	[INFO] 10.244.2.2:49203 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000190348s
	[INFO] 10.244.2.2:44849 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00019366s
	[INFO] 10.244.2.2:52186 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103082s
	[INFO] 10.244.0.4:58300 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140735s
	[INFO] 10.244.0.4:59752 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001702673s
	[INFO] 10.244.0.4:33721 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001170599s
	[INFO] 10.244.0.4:42180 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061647s
	[INFO] 10.244.1.2:49177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000333372s
	[INFO] 10.244.1.2:57192 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147894s
	[INFO] 10.244.1.2:59125 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095482s
	[INFO] 10.244.2.2:50879 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019818s
	[INFO] 10.244.2.2:47467 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096359s
	[INFO] 10.244.0.4:54464 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087148s
	[INFO] 10.244.0.4:40326 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011895s
	[INFO] 10.244.0.4:46142 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071583s
	[INFO] 10.244.1.2:50168 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000224622s
	[INFO] 10.244.2.2:50611 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000117577s
	[INFO] 10.244.0.4:57391 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000320119s
	
	
	==> describe nodes <==
	Name:               ha-525790
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-525790
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=ha-525790
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_47_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:47:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-525790
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:53:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:50:23 +0000   Fri, 20 Sep 2024 18:47:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:50:23 +0000   Fri, 20 Sep 2024 18:47:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:50:23 +0000   Fri, 20 Sep 2024 18:47:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:50:23 +0000   Fri, 20 Sep 2024 18:47:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.149
	  Hostname:    ha-525790
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d3f2b96a8819496a94e034cf4adf7a85
	  System UUID:                d3f2b96a-8819-496a-94e0-34cf4adf7a85
	  Boot ID:                    02f79ecd-567f-4683-83ce-59afb46feab6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-z26jr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 coredns-7c65d6cfc9-nfnkj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m1s
	  kube-system                 coredns-7c65d6cfc9-rpcds             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m1s
	  kube-system                 etcd-ha-525790                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m6s
	  kube-system                 kindnet-9qbm6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m2s
	  kube-system                 kube-apiserver-ha-525790             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-controller-manager-ha-525790    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-proxy-958jz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-scheduler-ha-525790             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-vip-ha-525790                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m1s   kube-proxy       
	  Normal  Starting                 6m6s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m6s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m6s   kubelet          Node ha-525790 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m6s   kubelet          Node ha-525790 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m6s   kubelet          Node ha-525790 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m2s   node-controller  Node ha-525790 event: Registered Node ha-525790 in Controller
	  Normal  NodeReady                5m49s  kubelet          Node ha-525790 status is now: NodeReady
	  Normal  RegisteredNode           5m3s   node-controller  Node ha-525790 event: Registered Node ha-525790 in Controller
	  Normal  RegisteredNode           3m52s  node-controller  Node ha-525790 event: Registered Node ha-525790 in Controller
	
	
	Name:               ha-525790-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-525790-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=ha-525790
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T18_48_16_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:48:13 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-525790-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:50:56 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 20 Sep 2024 18:50:16 +0000   Fri, 20 Sep 2024 18:51:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 20 Sep 2024 18:50:16 +0000   Fri, 20 Sep 2024 18:51:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 20 Sep 2024 18:50:16 +0000   Fri, 20 Sep 2024 18:51:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 20 Sep 2024 18:50:16 +0000   Fri, 20 Sep 2024 18:51:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.246
	  Hostname:    ha-525790-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1dbde4511fc24bbcb1281f7b7d6ff24f
	  System UUID:                1dbde451-1fc2-4bbc-b128-1f7b7d6ff24f
	  Boot ID:                    9ec76d35-ca9a-483c-b479-9d99ec8feedc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7jtss                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 etcd-ha-525790-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m11s
	  kube-system                 kindnet-8glgp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m13s
	  kube-system                 kube-apiserver-ha-525790-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-controller-manager-ha-525790-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-proxy-sspfs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-scheduler-ha-525790-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-vip-ha-525790-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m13s (x8 over 5m13s)  kubelet          Node ha-525790-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m13s (x8 over 5m13s)  kubelet          Node ha-525790-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m13s (x7 over 5m13s)  kubelet          Node ha-525790-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m8s                   node-controller  Node ha-525790-m02 event: Registered Node ha-525790-m02 in Controller
	  Normal  RegisteredNode           5m4s                   node-controller  Node ha-525790-m02 event: Registered Node ha-525790-m02 in Controller
	  Normal  RegisteredNode           3m53s                  node-controller  Node ha-525790-m02 event: Registered Node ha-525790-m02 in Controller
	  Normal  NodeNotReady             108s                   node-controller  Node ha-525790-m02 status is now: NodeNotReady
	
	
	Name:               ha-525790-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-525790-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=ha-525790
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T18_49_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:49:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-525790-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:53:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:49:54 +0000   Fri, 20 Sep 2024 18:49:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:49:54 +0000   Fri, 20 Sep 2024 18:49:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:49:54 +0000   Fri, 20 Sep 2024 18:49:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:49:54 +0000   Fri, 20 Sep 2024 18:49:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.105
	  Hostname:    ha-525790-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 007556c5fa674bcd927152e3b0cca9b2
	  System UUID:                007556c5-fa67-4bcd-9271-52e3b0cca9b2
	  Boot ID:                    2d4db773-7cb0-4bef-b28d-d6863649acb9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-jmx4g                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 etcd-ha-525790-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m
	  kube-system                 kindnet-j5mmq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m2s
	  kube-system                 kube-apiserver-ha-525790-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-controller-manager-ha-525790-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-proxy-dx9pg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-ha-525790-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-vip-ha-525790-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m56s                kube-proxy       
	  Normal  NodeAllocatableEnforced  4m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m2s (x8 over 4m3s)  kubelet          Node ha-525790-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m2s (x8 over 4m3s)  kubelet          Node ha-525790-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m2s (x7 over 4m3s)  kubelet          Node ha-525790-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-525790-m03 event: Registered Node ha-525790-m03 in Controller
	  Normal  RegisteredNode           3m58s                node-controller  Node ha-525790-m03 event: Registered Node ha-525790-m03 in Controller
	  Normal  RegisteredNode           3m53s                node-controller  Node ha-525790-m03 event: Registered Node ha-525790-m03 in Controller
	
	
	Name:               ha-525790-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-525790-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=ha-525790
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T18_50_26_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:50:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-525790-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:53:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:50:57 +0000   Fri, 20 Sep 2024 18:50:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:50:57 +0000   Fri, 20 Sep 2024 18:50:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:50:57 +0000   Fri, 20 Sep 2024 18:50:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:50:57 +0000   Fri, 20 Sep 2024 18:50:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.181
	  Hostname:    ha-525790-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c58d814e5e5d49b699d9f977eb54ff58
	  System UUID:                c58d814e-5e5d-49b6-99d9-f977eb54ff58
	  Boot ID:                    69924ac5-b6f2-4ddd-bd0d-fa3c683681d6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-df8hf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m
	  kube-system                 kube-proxy-w98cx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m55s              kube-proxy       
	  Normal  NodeHasSufficientMemory  3m (x2 over 3m1s)  kubelet          Node ha-525790-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m (x2 over 3m1s)  kubelet          Node ha-525790-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m (x2 over 3m1s)  kubelet          Node ha-525790-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m59s              node-controller  Node ha-525790-m04 event: Registered Node ha-525790-m04 in Controller
	  Normal  RegisteredNode           2m58s              node-controller  Node ha-525790-m04 event: Registered Node ha-525790-m04 in Controller
	  Normal  RegisteredNode           2m58s              node-controller  Node ha-525790-m04 event: Registered Node ha-525790-m04 in Controller
	  Normal  NodeReady                2m42s              kubelet          Node ha-525790-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep20 18:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049615] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041335] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.781215] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.493789] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.593513] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep20 18:47] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.053987] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058272] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.180542] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.143015] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.280287] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +3.923962] systemd-fstab-generator[743]: Ignoring "noauto" option for root device
	[  +3.905808] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.064972] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.290695] systemd-fstab-generator[1298]: Ignoring "noauto" option for root device
	[  +0.091789] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.472602] kauditd_printk_skb: 36 callbacks suppressed
	[ +11.974718] kauditd_printk_skb: 23 callbacks suppressed
	[Sep20 18:48] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93] <==
	{"level":"warn","ts":"2024-09-20T18:53:25.939114Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:25.944172Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:25.962706Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:25.970411Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:25.976382Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:25.988606Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:25.997518Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:26.003482Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:26.015481Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:26.027489Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:26.059207Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:26.064118Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:26.067222Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:26.074589Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:26.076208Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:26.080806Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:26.086682Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:26.089984Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:26.092686Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:26.096563Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:26.110395Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:26.111472Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:26.112607Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:26.118377Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:26.176738Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:53:26 up 6 min,  0 users,  load average: 0.39, 0.22, 0.11
	Linux ha-525790 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98] <==
	I0920 18:52:55.880188       1 main.go:299] handling current node
	I0920 18:53:05.885518       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 18:53:05.885574       1 main.go:299] handling current node
	I0920 18:53:05.885598       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:53:05.885604       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 18:53:05.885762       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0920 18:53:05.885786       1 main.go:322] Node ha-525790-m03 has CIDR [10.244.2.0/24] 
	I0920 18:53:05.885836       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 18:53:05.885842       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	I0920 18:53:15.886307       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 18:53:15.886406       1 main.go:299] handling current node
	I0920 18:53:15.886461       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:53:15.886488       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 18:53:15.886631       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0920 18:53:15.886653       1 main.go:322] Node ha-525790-m03 has CIDR [10.244.2.0/24] 
	I0920 18:53:15.886712       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 18:53:15.886731       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	I0920 18:53:25.880388       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 18:53:25.880418       1 main.go:299] handling current node
	I0920 18:53:25.880431       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:53:25.880437       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 18:53:25.880623       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0920 18:53:25.880629       1 main.go:322] Node ha-525790-m03 has CIDR [10.244.2.0/24] 
	I0920 18:53:25.880667       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 18:53:25.880672       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [1196adfd1199669f289106197057a6a027e2a97c97830961d5c102c7143d67bb] <==
	W0920 18:47:18.009766       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.149]
	I0920 18:47:18.010784       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 18:47:18.015641       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0920 18:47:18.249854       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0920 18:47:19.683867       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 18:47:19.709897       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0920 18:47:19.867045       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 18:47:23.355786       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0920 18:47:23.802179       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0920 18:49:53.563053       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46192: use of closed network connection
	E0920 18:49:53.772052       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46208: use of closed network connection
	E0920 18:49:53.971905       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46230: use of closed network connection
	E0920 18:49:54.183484       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46258: use of closed network connection
	E0920 18:49:54.358996       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46282: use of closed network connection
	E0920 18:49:54.568631       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46306: use of closed network connection
	E0920 18:49:54.751815       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46320: use of closed network connection
	E0920 18:49:54.931094       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46346: use of closed network connection
	E0920 18:49:55.134164       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46362: use of closed network connection
	E0920 18:49:55.422343       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46396: use of closed network connection
	E0920 18:49:55.606742       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46420: use of closed network connection
	E0920 18:49:55.788879       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46442: use of closed network connection
	E0920 18:49:55.968453       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46450: use of closed network connection
	E0920 18:49:56.152146       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46460: use of closed network connection
	E0920 18:49:56.335452       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46464: use of closed network connection
	W0920 18:51:07.982250       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.105 192.168.39.149]
	
	
	==> kube-controller-manager [49582cb9e07244a19c1edf1accdab94a1702d3cccc9e120b67b8c49f7629db72] <==
	I0920 18:50:26.211532       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-525790-m04" podCIDRs=["10.244.3.0/24"]
	I0920 18:50:26.211587       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:26.211616       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:26.225025       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:26.521754       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:26.959047       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:27.339450       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:28.189228       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:28.189762       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-525790-m04"
	I0920 18:50:28.268460       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:28.721421       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:28.749109       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:36.536189       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:44.973968       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:44.974514       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-525790-m04"
	I0920 18:50:44.992518       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:47.269588       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:57.000828       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:51:38.216141       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-525790-m04"
	I0920 18:51:38.216594       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m02"
	I0920 18:51:38.240377       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m02"
	I0920 18:51:38.269433       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.448755ms"
	I0920 18:51:38.269538       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="52.42µs"
	I0920 18:51:38.804819       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m02"
	I0920 18:51:43.466404       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m02"
	
	
	==> kube-proxy [3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:47:24.817372       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 18:47:24.843820       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.149"]
	E0920 18:47:24.843948       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:47:24.955225       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:47:24.955317       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:47:24.955347       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:47:24.958548       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:47:24.959874       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:47:24.959905       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:47:24.962813       1 config.go:199] "Starting service config controller"
	I0920 18:47:24.965782       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:47:24.965817       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:47:24.968165       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:47:24.968295       1 config.go:328] "Starting node config controller"
	I0920 18:47:24.968302       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:47:25.067459       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:47:25.068474       1 shared_informer.go:320] Caches are synced for node config
	I0920 18:47:25.068496       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706] <==
	E0920 18:49:50.397182       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-jmx4g\": pod busybox-7dff88458-jmx4g is already assigned to node \"ha-525790-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-jmx4g" node="ha-525790-m03"
	E0920 18:49:50.397248       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 223d79ec-368f-47a1-aa7b-26d153195e57(default/busybox-7dff88458-jmx4g) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-jmx4g"
	E0920 18:49:50.397330       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-jmx4g\": pod busybox-7dff88458-jmx4g is already assigned to node \"ha-525790-m03\"" pod="default/busybox-7dff88458-jmx4g"
	I0920 18:49:50.397369       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-jmx4g" node="ha-525790-m03"
	E0920 18:49:50.409140       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-z26jr\": pod busybox-7dff88458-z26jr is already assigned to node \"ha-525790\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-z26jr" node="ha-525790"
	E0920 18:49:50.409195       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3a3cda3d-ccab-4483-98e6-50d779cc3354(default/busybox-7dff88458-z26jr) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-z26jr"
	E0920 18:49:50.409213       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-z26jr\": pod busybox-7dff88458-z26jr is already assigned to node \"ha-525790\"" pod="default/busybox-7dff88458-z26jr"
	I0920 18:49:50.409243       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-z26jr" node="ha-525790"
	E0920 18:49:50.532066       1 schedule_one.go:1078] "Error occurred" err="Pod default/busybox-7dff88458-pt85x is already present in the active queue" pod="default/busybox-7dff88458-pt85x"
	E0920 18:50:26.262797       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-fz5b4\": pod kindnet-fz5b4 is already assigned to node \"ha-525790-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-fz5b4" node="ha-525790-m04"
	E0920 18:50:26.262881       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e8309f8d-3b06-4e9f-9bad-e0745dd2b30c(kube-system/kindnet-fz5b4) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-fz5b4"
	E0920 18:50:26.262903       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-fz5b4\": pod kindnet-fz5b4 is already assigned to node \"ha-525790-m04\"" pod="kube-system/kindnet-fz5b4"
	I0920 18:50:26.262924       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-fz5b4" node="ha-525790-m04"
	E0920 18:50:26.263223       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-w98cx\": pod kube-proxy-w98cx is already assigned to node \"ha-525790-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-w98cx" node="ha-525790-m04"
	E0920 18:50:26.263412       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod cd3e68cf-e7ed-47fc-ae4b-c701394a8c1f(kube-system/kube-proxy-w98cx) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-w98cx"
	E0920 18:50:26.263548       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-w98cx\": pod kube-proxy-w98cx is already assigned to node \"ha-525790-m04\"" pod="kube-system/kube-proxy-w98cx"
	I0920 18:50:26.263699       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-w98cx" node="ha-525790-m04"
	E0920 18:50:26.297985       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-hwgsh\": pod kindnet-hwgsh is already assigned to node \"ha-525790-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-hwgsh" node="ha-525790-m04"
	E0920 18:50:26.298064       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9ff40332-cdad-4e9f-99ca-28d1271713a8(kube-system/kindnet-hwgsh) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-hwgsh"
	E0920 18:50:26.298079       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-hwgsh\": pod kindnet-hwgsh is already assigned to node \"ha-525790-m04\"" pod="kube-system/kindnet-hwgsh"
	I0920 18:50:26.298095       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-hwgsh" node="ha-525790-m04"
	E0920 18:50:26.298461       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-rh89s\": pod kube-proxy-rh89s is already assigned to node \"ha-525790-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-rh89s" node="ha-525790-m04"
	E0920 18:50:26.298512       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 340d5abf-2e79-4cc0-8f1f-130c1e176259(kube-system/kube-proxy-rh89s) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-rh89s"
	E0920 18:50:26.298529       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-rh89s\": pod kube-proxy-rh89s is already assigned to node \"ha-525790-m04\"" pod="kube-system/kube-proxy-rh89s"
	I0920 18:50:26.298548       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-rh89s" node="ha-525790-m04"
	
	
	==> kubelet <==
	Sep 20 18:52:09 ha-525790 kubelet[1305]: E0920 18:52:09.760057    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858329758979059,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:52:19 ha-525790 kubelet[1305]: E0920 18:52:19.642485    1305 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 18:52:19 ha-525790 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 18:52:19 ha-525790 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 18:52:19 ha-525790 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 18:52:19 ha-525790 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 18:52:19 ha-525790 kubelet[1305]: E0920 18:52:19.763018    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858339762333652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:52:19 ha-525790 kubelet[1305]: E0920 18:52:19.763043    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858339762333652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:52:29 ha-525790 kubelet[1305]: E0920 18:52:29.765356    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858349764077984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:52:29 ha-525790 kubelet[1305]: E0920 18:52:29.765949    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858349764077984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:52:39 ha-525790 kubelet[1305]: E0920 18:52:39.767540    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858359766941786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:52:39 ha-525790 kubelet[1305]: E0920 18:52:39.767585    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858359766941786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:52:49 ha-525790 kubelet[1305]: E0920 18:52:49.770639    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858369770138616,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:52:49 ha-525790 kubelet[1305]: E0920 18:52:49.770662    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858369770138616,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:52:59 ha-525790 kubelet[1305]: E0920 18:52:59.772133    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858379771879043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:52:59 ha-525790 kubelet[1305]: E0920 18:52:59.772178    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858379771879043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:53:09 ha-525790 kubelet[1305]: E0920 18:53:09.773633    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858389773387666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:53:09 ha-525790 kubelet[1305]: E0920 18:53:09.773655    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858389773387666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:53:19 ha-525790 kubelet[1305]: E0920 18:53:19.642578    1305 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 18:53:19 ha-525790 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 18:53:19 ha-525790 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 18:53:19 ha-525790 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 18:53:19 ha-525790 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 18:53:19 ha-525790 kubelet[1305]: E0920 18:53:19.776518    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858399775795306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:53:19 ha-525790 kubelet[1305]: E0920 18:53:19.776559    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858399775795306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-525790 -n ha-525790
helpers_test.go:261: (dbg) Run:  kubectl --context ha-525790 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.52s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.386411505s)
ha_test.go:413: expected profile "ha-525790" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-525790\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-525790\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-525790\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.149\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.246\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.105\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.181\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,
\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize
\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-525790 -n ha-525790
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-525790 logs -n 25: (1.377951354s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-525790 cp ha-525790-m03:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3362703692/001/cp-test_ha-525790-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m03:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790:/home/docker/cp-test_ha-525790-m03_ha-525790.txt                       |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790 sudo cat                                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m03_ha-525790.txt                                 |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m03:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m02:/home/docker/cp-test_ha-525790-m03_ha-525790-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790-m02 sudo cat                                          | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m03_ha-525790-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m03:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04:/home/docker/cp-test_ha-525790-m03_ha-525790-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790-m04 sudo cat                                          | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m03_ha-525790-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-525790 cp testdata/cp-test.txt                                                | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3362703692/001/cp-test_ha-525790-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790:/home/docker/cp-test_ha-525790-m04_ha-525790.txt                       |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790 sudo cat                                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m04_ha-525790.txt                                 |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m02:/home/docker/cp-test_ha-525790-m04_ha-525790-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790-m02 sudo cat                                          | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m04_ha-525790-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m03:/home/docker/cp-test_ha-525790-m04_ha-525790-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790-m03 sudo cat                                          | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m04_ha-525790-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-525790 node stop m02 -v=7                                                     | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:46:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:46:38.789149  762988 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:46:38.789304  762988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:46:38.789316  762988 out.go:358] Setting ErrFile to fd 2...
	I0920 18:46:38.789323  762988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:46:38.789530  762988 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
	I0920 18:46:38.790164  762988 out.go:352] Setting JSON to false
	I0920 18:46:38.791213  762988 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8949,"bootTime":1726849050,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:46:38.791325  762988 start.go:139] virtualization: kvm guest
	I0920 18:46:38.794321  762988 out.go:177] * [ha-525790] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:46:38.795880  762988 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 18:46:38.795921  762988 notify.go:220] Checking for updates...
	I0920 18:46:38.798815  762988 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:46:38.800212  762988 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:46:38.801657  762988 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:46:38.802936  762988 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:46:38.804312  762988 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:46:38.805745  762988 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:46:38.840721  762988 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 18:46:38.841998  762988 start.go:297] selected driver: kvm2
	I0920 18:46:38.842017  762988 start.go:901] validating driver "kvm2" against <nil>
	I0920 18:46:38.842030  762988 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:46:38.842791  762988 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:46:38.842923  762988 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19678-739831/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:46:38.857953  762988 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:46:38.858007  762988 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:46:38.858244  762988 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:46:38.858274  762988 cni.go:84] Creating CNI manager for ""
	I0920 18:46:38.858324  762988 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0920 18:46:38.858332  762988 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 18:46:38.858385  762988 start.go:340] cluster config:
	{Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:46:38.858482  762988 iso.go:125] acquiring lock: {Name:mk7c8e0c52ea50ffb7ac28fb9347d4f667085c62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:46:38.861017  762988 out.go:177] * Starting "ha-525790" primary control-plane node in "ha-525790" cluster
	I0920 18:46:38.862480  762988 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:46:38.862534  762988 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:46:38.862548  762988 cache.go:56] Caching tarball of preloaded images
	I0920 18:46:38.862674  762988 preload.go:172] Found /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:46:38.862687  762988 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:46:38.863061  762988 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 18:46:38.863096  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json: {Name:mk5c775b0f6d6c9cf399952e81d482461c2f3276 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:46:38.863265  762988 start.go:360] acquireMachinesLock for ha-525790: {Name:mke27b943eaf3105a3a7818ba8cbb5bd07aa92e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:46:38.863304  762988 start.go:364] duration metric: took 22.887µs to acquireMachinesLock for "ha-525790"
	I0920 18:46:38.863326  762988 start.go:93] Provisioning new machine with config: &{Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:46:38.863386  762988 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 18:46:38.865997  762988 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 18:46:38.866141  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:46:38.866188  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:46:38.881131  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35251
	I0920 18:46:38.881605  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:46:38.882180  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:46:38.882202  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:46:38.882573  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:46:38.882762  762988 main.go:141] libmachine: (ha-525790) Calling .GetMachineName
	I0920 18:46:38.882960  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:46:38.883106  762988 start.go:159] libmachine.API.Create for "ha-525790" (driver="kvm2")
	I0920 18:46:38.883131  762988 client.go:168] LocalClient.Create starting
	I0920 18:46:38.883164  762988 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem
	I0920 18:46:38.883195  762988 main.go:141] libmachine: Decoding PEM data...
	I0920 18:46:38.883209  762988 main.go:141] libmachine: Parsing certificate...
	I0920 18:46:38.883266  762988 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem
	I0920 18:46:38.883283  762988 main.go:141] libmachine: Decoding PEM data...
	I0920 18:46:38.883293  762988 main.go:141] libmachine: Parsing certificate...
	I0920 18:46:38.883309  762988 main.go:141] libmachine: Running pre-create checks...
	I0920 18:46:38.883317  762988 main.go:141] libmachine: (ha-525790) Calling .PreCreateCheck
	I0920 18:46:38.883674  762988 main.go:141] libmachine: (ha-525790) Calling .GetConfigRaw
	I0920 18:46:38.884046  762988 main.go:141] libmachine: Creating machine...
	I0920 18:46:38.884058  762988 main.go:141] libmachine: (ha-525790) Calling .Create
	I0920 18:46:38.884186  762988 main.go:141] libmachine: (ha-525790) Creating KVM machine...
	I0920 18:46:38.885388  762988 main.go:141] libmachine: (ha-525790) DBG | found existing default KVM network
	I0920 18:46:38.886155  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:38.886012  763011 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015bb0}
	I0920 18:46:38.886212  762988 main.go:141] libmachine: (ha-525790) DBG | created network xml: 
	I0920 18:46:38.886231  762988 main.go:141] libmachine: (ha-525790) DBG | <network>
	I0920 18:46:38.886238  762988 main.go:141] libmachine: (ha-525790) DBG |   <name>mk-ha-525790</name>
	I0920 18:46:38.886242  762988 main.go:141] libmachine: (ha-525790) DBG |   <dns enable='no'/>
	I0920 18:46:38.886247  762988 main.go:141] libmachine: (ha-525790) DBG |   
	I0920 18:46:38.886265  762988 main.go:141] libmachine: (ha-525790) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 18:46:38.886272  762988 main.go:141] libmachine: (ha-525790) DBG |     <dhcp>
	I0920 18:46:38.886279  762988 main.go:141] libmachine: (ha-525790) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 18:46:38.886301  762988 main.go:141] libmachine: (ha-525790) DBG |     </dhcp>
	I0920 18:46:38.886355  762988 main.go:141] libmachine: (ha-525790) DBG |   </ip>
	I0920 18:46:38.886369  762988 main.go:141] libmachine: (ha-525790) DBG |   
	I0920 18:46:38.886374  762988 main.go:141] libmachine: (ha-525790) DBG | </network>
	I0920 18:46:38.886382  762988 main.go:141] libmachine: (ha-525790) DBG | 
	I0920 18:46:38.891425  762988 main.go:141] libmachine: (ha-525790) DBG | trying to create private KVM network mk-ha-525790 192.168.39.0/24...
	I0920 18:46:38.955444  762988 main.go:141] libmachine: (ha-525790) Setting up store path in /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790 ...
	I0920 18:46:38.955497  762988 main.go:141] libmachine: (ha-525790) Building disk image from file:///home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 18:46:38.955509  762988 main.go:141] libmachine: (ha-525790) DBG | private KVM network mk-ha-525790 192.168.39.0/24 created
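	For reference, the private network created above can also be defined by hand with virsh; a minimal sketch, assuming the <network> XML printed in the debug lines is saved to a hypothetical file net-mk-ha-525790.xml:
	  $ virsh net-define net-mk-ha-525790.xml
	  $ virsh net-start mk-ha-525790
	  $ virsh net-list --all    # mk-ha-525790 should now be listed as active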
	I0920 18:46:38.955527  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:38.955388  763011 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:46:38.955546  762988 main.go:141] libmachine: (ha-525790) Downloading /home/jenkins/minikube-integration/19678-739831/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 18:46:39.243592  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:39.243485  763011 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa...
	I0920 18:46:39.608366  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:39.608221  763011 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/ha-525790.rawdisk...
	I0920 18:46:39.608404  762988 main.go:141] libmachine: (ha-525790) DBG | Writing magic tar header
	I0920 18:46:39.608446  762988 main.go:141] libmachine: (ha-525790) DBG | Writing SSH key tar header
	I0920 18:46:39.608516  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:39.608475  763011 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790 ...
	I0920 18:46:39.608599  762988 main.go:141] libmachine: (ha-525790) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790
	I0920 18:46:39.608627  762988 main.go:141] libmachine: (ha-525790) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790 (perms=drwx------)
	I0920 18:46:39.608656  762988 main.go:141] libmachine: (ha-525790) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube/machines (perms=drwxr-xr-x)
	I0920 18:46:39.608670  762988 main.go:141] libmachine: (ha-525790) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube/machines
	I0920 18:46:39.608683  762988 main.go:141] libmachine: (ha-525790) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:46:39.608695  762988 main.go:141] libmachine: (ha-525790) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831
	I0920 18:46:39.608706  762988 main.go:141] libmachine: (ha-525790) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube (perms=drwxr-xr-x)
	I0920 18:46:39.608718  762988 main.go:141] libmachine: (ha-525790) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 18:46:39.608730  762988 main.go:141] libmachine: (ha-525790) DBG | Checking permissions on dir: /home/jenkins
	I0920 18:46:39.608740  762988 main.go:141] libmachine: (ha-525790) DBG | Checking permissions on dir: /home
	I0920 18:46:39.608750  762988 main.go:141] libmachine: (ha-525790) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831 (perms=drwxrwxr-x)
	I0920 18:46:39.608763  762988 main.go:141] libmachine: (ha-525790) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 18:46:39.608777  762988 main.go:141] libmachine: (ha-525790) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 18:46:39.608788  762988 main.go:141] libmachine: (ha-525790) Creating domain...
	I0920 18:46:39.608796  762988 main.go:141] libmachine: (ha-525790) DBG | Skipping /home - not owner
	I0920 18:46:39.609887  762988 main.go:141] libmachine: (ha-525790) define libvirt domain using xml: 
	I0920 18:46:39.609929  762988 main.go:141] libmachine: (ha-525790) <domain type='kvm'>
	I0920 18:46:39.609936  762988 main.go:141] libmachine: (ha-525790)   <name>ha-525790</name>
	I0920 18:46:39.609941  762988 main.go:141] libmachine: (ha-525790)   <memory unit='MiB'>2200</memory>
	I0920 18:46:39.609946  762988 main.go:141] libmachine: (ha-525790)   <vcpu>2</vcpu>
	I0920 18:46:39.609950  762988 main.go:141] libmachine: (ha-525790)   <features>
	I0920 18:46:39.609954  762988 main.go:141] libmachine: (ha-525790)     <acpi/>
	I0920 18:46:39.609958  762988 main.go:141] libmachine: (ha-525790)     <apic/>
	I0920 18:46:39.609963  762988 main.go:141] libmachine: (ha-525790)     <pae/>
	I0920 18:46:39.609972  762988 main.go:141] libmachine: (ha-525790)     
	I0920 18:46:39.609977  762988 main.go:141] libmachine: (ha-525790)   </features>
	I0920 18:46:39.609981  762988 main.go:141] libmachine: (ha-525790)   <cpu mode='host-passthrough'>
	I0920 18:46:39.609988  762988 main.go:141] libmachine: (ha-525790)   
	I0920 18:46:39.609991  762988 main.go:141] libmachine: (ha-525790)   </cpu>
	I0920 18:46:39.609996  762988 main.go:141] libmachine: (ha-525790)   <os>
	I0920 18:46:39.610000  762988 main.go:141] libmachine: (ha-525790)     <type>hvm</type>
	I0920 18:46:39.610004  762988 main.go:141] libmachine: (ha-525790)     <boot dev='cdrom'/>
	I0920 18:46:39.610012  762988 main.go:141] libmachine: (ha-525790)     <boot dev='hd'/>
	I0920 18:46:39.610034  762988 main.go:141] libmachine: (ha-525790)     <bootmenu enable='no'/>
	I0920 18:46:39.610055  762988 main.go:141] libmachine: (ha-525790)   </os>
	I0920 18:46:39.610063  762988 main.go:141] libmachine: (ha-525790)   <devices>
	I0920 18:46:39.610071  762988 main.go:141] libmachine: (ha-525790)     <disk type='file' device='cdrom'>
	I0920 18:46:39.610087  762988 main.go:141] libmachine: (ha-525790)       <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/boot2docker.iso'/>
	I0920 18:46:39.610097  762988 main.go:141] libmachine: (ha-525790)       <target dev='hdc' bus='scsi'/>
	I0920 18:46:39.610105  762988 main.go:141] libmachine: (ha-525790)       <readonly/>
	I0920 18:46:39.610111  762988 main.go:141] libmachine: (ha-525790)     </disk>
	I0920 18:46:39.610117  762988 main.go:141] libmachine: (ha-525790)     <disk type='file' device='disk'>
	I0920 18:46:39.610124  762988 main.go:141] libmachine: (ha-525790)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 18:46:39.610165  762988 main.go:141] libmachine: (ha-525790)       <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/ha-525790.rawdisk'/>
	I0920 18:46:39.610187  762988 main.go:141] libmachine: (ha-525790)       <target dev='hda' bus='virtio'/>
	I0920 18:46:39.610197  762988 main.go:141] libmachine: (ha-525790)     </disk>
	I0920 18:46:39.610210  762988 main.go:141] libmachine: (ha-525790)     <interface type='network'>
	I0920 18:46:39.610222  762988 main.go:141] libmachine: (ha-525790)       <source network='mk-ha-525790'/>
	I0920 18:46:39.610232  762988 main.go:141] libmachine: (ha-525790)       <model type='virtio'/>
	I0920 18:46:39.610240  762988 main.go:141] libmachine: (ha-525790)     </interface>
	I0920 18:46:39.610250  762988 main.go:141] libmachine: (ha-525790)     <interface type='network'>
	I0920 18:46:39.610258  762988 main.go:141] libmachine: (ha-525790)       <source network='default'/>
	I0920 18:46:39.610275  762988 main.go:141] libmachine: (ha-525790)       <model type='virtio'/>
	I0920 18:46:39.610283  762988 main.go:141] libmachine: (ha-525790)     </interface>
	I0920 18:46:39.610288  762988 main.go:141] libmachine: (ha-525790)     <serial type='pty'>
	I0920 18:46:39.610292  762988 main.go:141] libmachine: (ha-525790)       <target port='0'/>
	I0920 18:46:39.610299  762988 main.go:141] libmachine: (ha-525790)     </serial>
	I0920 18:46:39.610308  762988 main.go:141] libmachine: (ha-525790)     <console type='pty'>
	I0920 18:46:39.610326  762988 main.go:141] libmachine: (ha-525790)       <target type='serial' port='0'/>
	I0920 18:46:39.610338  762988 main.go:141] libmachine: (ha-525790)     </console>
	I0920 18:46:39.610349  762988 main.go:141] libmachine: (ha-525790)     <rng model='virtio'>
	I0920 18:46:39.610362  762988 main.go:141] libmachine: (ha-525790)       <backend model='random'>/dev/random</backend>
	I0920 18:46:39.610371  762988 main.go:141] libmachine: (ha-525790)     </rng>
	I0920 18:46:39.610375  762988 main.go:141] libmachine: (ha-525790)     
	I0920 18:46:39.610381  762988 main.go:141] libmachine: (ha-525790)     
	I0920 18:46:39.610387  762988 main.go:141] libmachine: (ha-525790)   </devices>
	I0920 18:46:39.610397  762988 main.go:141] libmachine: (ha-525790) </domain>
	I0920 18:46:39.610405  762988 main.go:141] libmachine: (ha-525790) 
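	The domain XML above is what libmachine hands to libvirt (boot2docker ISO as CD-ROM, the raw disk, one NIC on mk-ha-525790 and one on the default NAT network); a minimal sketch of the equivalent manual steps, assuming the XML is saved to a hypothetical file ha-525790.xml:
	  $ virsh define ha-525790.xml
	  $ virsh start ha-525790
	  $ virsh dominfo ha-525790    # "State: running" once the VM is up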
	I0920 18:46:39.614486  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:50:2a:69 in network default
	I0920 18:46:39.615032  762988 main.go:141] libmachine: (ha-525790) Ensuring networks are active...
	I0920 18:46:39.615051  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:39.615715  762988 main.go:141] libmachine: (ha-525790) Ensuring network default is active
	I0920 18:46:39.616018  762988 main.go:141] libmachine: (ha-525790) Ensuring network mk-ha-525790 is active
	I0920 18:46:39.616415  762988 main.go:141] libmachine: (ha-525790) Getting domain xml...
	I0920 18:46:39.617025  762988 main.go:141] libmachine: (ha-525790) Creating domain...
	I0920 18:46:40.795742  762988 main.go:141] libmachine: (ha-525790) Waiting to get IP...
	I0920 18:46:40.796420  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:40.796852  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:40.796878  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:40.796826  763011 retry.go:31] will retry after 263.82587ms: waiting for machine to come up
	I0920 18:46:41.062273  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:41.062647  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:41.062678  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:41.062592  763011 retry.go:31] will retry after 386.712635ms: waiting for machine to come up
	I0920 18:46:41.451226  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:41.451632  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:41.451661  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:41.451579  763011 retry.go:31] will retry after 342.693912ms: waiting for machine to come up
	I0920 18:46:41.796191  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:41.796691  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:41.796715  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:41.796648  763011 retry.go:31] will retry after 576.710058ms: waiting for machine to come up
	I0920 18:46:42.375515  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:42.376036  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:42.376061  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:42.375999  763011 retry.go:31] will retry after 663.670245ms: waiting for machine to come up
	I0920 18:46:43.040735  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:43.041215  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:43.041246  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:43.041140  763011 retry.go:31] will retry after 597.358521ms: waiting for machine to come up
	I0920 18:46:43.639686  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:43.640007  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:43.640036  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:43.639963  763011 retry.go:31] will retry after 1.058911175s: waiting for machine to come up
	I0920 18:46:44.700947  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:44.701385  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:44.701413  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:44.701343  763011 retry.go:31] will retry after 1.038799294s: waiting for machine to come up
	I0920 18:46:45.741663  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:45.742102  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:45.742126  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:45.742045  763011 retry.go:31] will retry after 1.383433424s: waiting for machine to come up
	I0920 18:46:47.127537  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:47.128058  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:47.128078  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:47.127983  763011 retry.go:31] will retry after 1.617569351s: waiting for machine to come up
	I0920 18:46:48.747698  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:48.748209  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:48.748240  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:48.748143  763011 retry.go:31] will retry after 2.371010271s: waiting for machine to come up
	I0920 18:46:51.120964  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:51.121427  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:51.121458  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:51.121379  763011 retry.go:31] will retry after 2.200163157s: waiting for machine to come up
	I0920 18:46:53.322674  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:53.322965  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:53.322986  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:53.322923  763011 retry.go:31] will retry after 3.176543377s: waiting for machine to come up
	I0920 18:46:56.502595  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:56.502881  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:56.502907  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:56.502808  763011 retry.go:31] will retry after 5.194371334s: waiting for machine to come up
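	While minikube polls for the guest's address with a retry backoff, the same lease information can be read directly from libvirt; a minimal sketch:
	  $ virsh net-dhcp-leases mk-ha-525790         # DHCP leases handed out on the private network
	  $ virsh domifaddr ha-525790 --source lease   # IP/MAC recorded for the domain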
	I0920 18:47:01.701005  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:01.701389  762988 main.go:141] libmachine: (ha-525790) Found IP for machine: 192.168.39.149
	I0920 18:47:01.701409  762988 main.go:141] libmachine: (ha-525790) Reserving static IP address...
	I0920 18:47:01.701417  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has current primary IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:01.701762  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find host DHCP lease matching {name: "ha-525790", mac: "52:54:00:93:48:3a", ip: "192.168.39.149"} in network mk-ha-525790
	I0920 18:47:01.773329  762988 main.go:141] libmachine: (ha-525790) DBG | Getting to WaitForSSH function...
	I0920 18:47:01.773358  762988 main.go:141] libmachine: (ha-525790) Reserved static IP address: 192.168.39.149
	I0920 18:47:01.773388  762988 main.go:141] libmachine: (ha-525790) Waiting for SSH to be available...
	I0920 18:47:01.776048  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:01.776426  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:minikube Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:01.776463  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:01.776622  762988 main.go:141] libmachine: (ha-525790) DBG | Using SSH client type: external
	I0920 18:47:01.776646  762988 main.go:141] libmachine: (ha-525790) DBG | Using SSH private key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa (-rw-------)
	I0920 18:47:01.776683  762988 main.go:141] libmachine: (ha-525790) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.149 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:47:01.776700  762988 main.go:141] libmachine: (ha-525790) DBG | About to run SSH command:
	I0920 18:47:01.776715  762988 main.go:141] libmachine: (ha-525790) DBG | exit 0
	I0920 18:47:01.898967  762988 main.go:141] libmachine: (ha-525790) DBG | SSH cmd err, output: <nil>: 
	I0920 18:47:01.899221  762988 main.go:141] libmachine: (ha-525790) KVM machine creation complete!
	I0920 18:47:01.899544  762988 main.go:141] libmachine: (ha-525790) Calling .GetConfigRaw
	I0920 18:47:01.900277  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:01.900493  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:01.900650  762988 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 18:47:01.900666  762988 main.go:141] libmachine: (ha-525790) Calling .GetState
	I0920 18:47:01.901918  762988 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 18:47:01.901931  762988 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 18:47:01.901936  762988 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 18:47:01.901941  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:01.904499  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:01.904882  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:01.904911  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:01.905023  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:01.905203  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:01.905333  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:01.905455  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:01.905648  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:01.905950  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:47:01.905967  762988 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 18:47:02.002303  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:47:02.002325  762988 main.go:141] libmachine: Detecting the provisioner...
	I0920 18:47:02.002332  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:02.005206  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.005502  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.005524  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.005703  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:02.005932  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.006115  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.006265  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:02.006494  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:02.006725  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:47:02.006738  762988 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 18:47:02.103696  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 18:47:02.103818  762988 main.go:141] libmachine: found compatible host: buildroot
	I0920 18:47:02.103834  762988 main.go:141] libmachine: Provisioning with buildroot...
	I0920 18:47:02.103845  762988 main.go:141] libmachine: (ha-525790) Calling .GetMachineName
	I0920 18:47:02.104117  762988 buildroot.go:166] provisioning hostname "ha-525790"
	I0920 18:47:02.104147  762988 main.go:141] libmachine: (ha-525790) Calling .GetMachineName
	I0920 18:47:02.104362  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:02.107026  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.107445  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.107466  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.107725  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:02.107909  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.108050  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.108218  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:02.108380  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:02.108558  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:47:02.108576  762988 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-525790 && echo "ha-525790" | sudo tee /etc/hostname
	I0920 18:47:02.221193  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-525790
	
	I0920 18:47:02.221225  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:02.224188  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.224526  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.224548  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.224771  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:02.224973  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.225135  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.225274  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:02.225455  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:02.225692  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:47:02.225716  762988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-525790' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-525790/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-525790' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:47:02.333039  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
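	The shell snippet above sets the guest hostname and pins it to 127.0.1.1 in /etc/hosts; a minimal sketch of verifying the result from the host, reusing the key path and address shown in the earlier SSH debug lines (output is only indicative):
	  $ ssh -o StrictHostKeyChecking=no \
	        -i /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa \
	        docker@192.168.39.149 'hostname && grep ha-525790 /etc/hosts'
	  ha-525790
	  127.0.1.1 ha-525790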
	I0920 18:47:02.333077  762988 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19678-739831/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-739831/.minikube}
	I0920 18:47:02.333139  762988 buildroot.go:174] setting up certificates
	I0920 18:47:02.333156  762988 provision.go:84] configureAuth start
	I0920 18:47:02.333175  762988 main.go:141] libmachine: (ha-525790) Calling .GetMachineName
	I0920 18:47:02.333477  762988 main.go:141] libmachine: (ha-525790) Calling .GetIP
	I0920 18:47:02.336179  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.336437  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.336466  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.336621  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:02.338903  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.339190  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.339228  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.339347  762988 provision.go:143] copyHostCerts
	I0920 18:47:02.339388  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 18:47:02.339428  762988 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem, removing ...
	I0920 18:47:02.339443  762988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 18:47:02.339511  762988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem (1123 bytes)
	I0920 18:47:02.339645  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 18:47:02.339667  762988 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem, removing ...
	I0920 18:47:02.339674  762988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 18:47:02.339705  762988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem (1679 bytes)
	I0920 18:47:02.339762  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 18:47:02.339781  762988 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem, removing ...
	I0920 18:47:02.339788  762988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 18:47:02.339812  762988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem (1078 bytes)
	I0920 18:47:02.339874  762988 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem org=jenkins.ha-525790 san=[127.0.0.1 192.168.39.149 ha-525790 localhost minikube]
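	configureAuth then signs a server certificate carrying the SANs listed above; a minimal sketch of inspecting it on the host with openssl:
	  $ openssl x509 -noout -text \
	        -in /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem \
	        | grep -A1 'Subject Alternative Name'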
	I0920 18:47:02.453692  762988 provision.go:177] copyRemoteCerts
	I0920 18:47:02.453777  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:47:02.453804  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:02.456622  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.456981  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.457012  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.457155  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:02.457322  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.457514  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:02.457694  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:47:02.537102  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 18:47:02.537192  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:47:02.561583  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 18:47:02.561653  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0920 18:47:02.584887  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 18:47:02.584963  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:47:02.607882  762988 provision.go:87] duration metric: took 274.708599ms to configureAuth
	I0920 18:47:02.607913  762988 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:47:02.608135  762988 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:47:02.608263  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:02.610585  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.610941  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.610966  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.611170  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:02.611364  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.611566  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.611733  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:02.611901  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:02.612097  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:47:02.612128  762988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:47:02.825619  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:47:02.825649  762988 main.go:141] libmachine: Checking connection to Docker...
	I0920 18:47:02.825670  762988 main.go:141] libmachine: (ha-525790) Calling .GetURL
	I0920 18:47:02.826777  762988 main.go:141] libmachine: (ha-525790) DBG | Using libvirt version 6000000
	I0920 18:47:02.828685  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.829016  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.829041  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.829240  762988 main.go:141] libmachine: Docker is up and running!
	I0920 18:47:02.829256  762988 main.go:141] libmachine: Reticulating splines...
	I0920 18:47:02.829269  762988 client.go:171] duration metric: took 23.94612541s to LocalClient.Create
	I0920 18:47:02.829292  762988 start.go:167] duration metric: took 23.946187981s to libmachine.API.Create "ha-525790"
	I0920 18:47:02.829302  762988 start.go:293] postStartSetup for "ha-525790" (driver="kvm2")
	I0920 18:47:02.829311  762988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:47:02.829329  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:02.829550  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:47:02.829607  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:02.831515  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.831740  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.831770  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.831871  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:02.832029  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.832155  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:02.832317  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:47:02.912925  762988 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:47:02.917265  762988 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:47:02.917289  762988 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/addons for local assets ...
	I0920 18:47:02.917365  762988 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/files for local assets ...
	I0920 18:47:02.917439  762988 filesync.go:149] local asset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> 7484972.pem in /etc/ssl/certs
	I0920 18:47:02.917449  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /etc/ssl/certs/7484972.pem
	I0920 18:47:02.917538  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:47:02.926976  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 18:47:02.950998  762988 start.go:296] duration metric: took 121.680006ms for postStartSetup
	I0920 18:47:02.951052  762988 main.go:141] libmachine: (ha-525790) Calling .GetConfigRaw
	I0920 18:47:02.951761  762988 main.go:141] libmachine: (ha-525790) Calling .GetIP
	I0920 18:47:02.954370  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.954692  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.954720  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.954955  762988 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 18:47:02.955155  762988 start.go:128] duration metric: took 24.09175682s to createHost
	I0920 18:47:02.955178  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:02.957364  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.957683  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.957707  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.957847  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:02.958049  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.958195  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.958370  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:02.958531  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:02.958721  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:47:02.958745  762988 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:47:03.055624  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726858023.014434190
	
	I0920 18:47:03.055646  762988 fix.go:216] guest clock: 1726858023.014434190
	I0920 18:47:03.055653  762988 fix.go:229] Guest: 2024-09-20 18:47:03.01443419 +0000 UTC Remote: 2024-09-20 18:47:02.955165997 +0000 UTC m=+24.204227210 (delta=59.268193ms)
	I0920 18:47:03.055673  762988 fix.go:200] guest clock delta is within tolerance: 59.268193ms
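
The three fix.go lines above show how the guest clock is sanity-checked: `date +%s.%N` is run over SSH, the result is parsed, and it is compared with the host's wall clock (here the delta is ~59ms, within tolerance). A minimal Go sketch of that comparison, assuming a hypothetical tolerance constant (the real threshold lives in minikube's fix.go):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseUnixNano turns the output of `date +%s.%N` (e.g. "1726858023.014434190")
// into a time.Time. %N is zero-padded to nine digits, so the fractional part
// can be read directly as nanoseconds.
func parseUnixNano(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec).UTC(), nil
}

func main() {
	guest, _ := parseUnixNano("1726858023.014434190")              // reported by the VM
	host := time.Date(2024, 9, 20, 18, 47, 2, 955165997, time.UTC) // host wall clock

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // hypothetical threshold, for illustration only
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
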
	I0920 18:47:03.055678  762988 start.go:83] releasing machines lock for "ha-525790", held for 24.192365497s
	I0920 18:47:03.055696  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:03.056004  762988 main.go:141] libmachine: (ha-525790) Calling .GetIP
	I0920 18:47:03.058619  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:03.058967  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:03.059002  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:03.059176  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:03.059645  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:03.059786  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:03.059913  762988 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:47:03.059955  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:03.060006  762988 ssh_runner.go:195] Run: cat /version.json
	I0920 18:47:03.060036  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:03.062498  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:03.062744  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:03.062833  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:03.062884  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:03.063020  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:03.063078  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:03.063109  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:03.063168  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:03.063236  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:03.063307  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:03.063405  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:03.063423  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:47:03.063542  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:03.063665  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:47:03.136335  762988 ssh_runner.go:195] Run: systemctl --version
	I0920 18:47:03.170125  762988 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:47:03.331364  762988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:47:03.337153  762988 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:47:03.337233  762988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:47:03.353297  762988 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:47:03.353324  762988 start.go:495] detecting cgroup driver to use...
	I0920 18:47:03.353385  762988 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:47:03.369816  762988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:47:03.383774  762988 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:47:03.383838  762988 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:47:03.397487  762988 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:47:03.411243  762988 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:47:03.523455  762988 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:47:03.671823  762988 docker.go:233] disabling docker service ...
	I0920 18:47:03.671918  762988 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:47:03.687139  762988 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:47:03.700569  762988 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:47:03.840971  762988 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:47:03.962385  762988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:47:03.976750  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:47:03.995774  762988 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:47:03.995835  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:04.007019  762988 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:47:04.007124  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:04.018001  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:04.028509  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:04.039860  762988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:47:04.050769  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:04.061191  762988 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:04.077692  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:04.088041  762988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:47:04.097754  762988 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:47:04.097807  762988 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:47:04.110739  762988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:47:04.120636  762988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:47:04.245299  762988 ssh_runner.go:195] Run: sudo systemctl restart crio
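
The block above rewrites CRI-O's drop-in config with sed (pause image, cgroupfs as cgroup manager, conmon_cgroup, default_sysctls), points crictl at crio.sock, loads br_netfilter when the bridge sysctl is missing, enables IPv4 forwarding, and restarts crio. A sketch of the same substitution idea in Go, operating on an in-memory copy of 02-crio.conf (illustrative only; the logged flow shells out to sed on the guest):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the two key substitutions seen above to the contents
// of /etc/crio/crio.conf.d/02-crio.conf: the pause image and the cgroup manager.
func rewriteCrioConf(conf, pauseImage, cgroupDriver string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupDriver))
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.10", "cgroupfs"))
}
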
	I0920 18:47:04.341170  762988 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:47:04.341258  762988 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:47:04.345975  762988 start.go:563] Will wait 60s for crictl version
	I0920 18:47:04.346047  762988 ssh_runner.go:195] Run: which crictl
	I0920 18:47:04.349925  762988 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:47:04.390230  762988 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:47:04.390341  762988 ssh_runner.go:195] Run: crio --version
	I0920 18:47:04.418445  762988 ssh_runner.go:195] Run: crio --version
	I0920 18:47:04.447740  762988 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:47:04.448969  762988 main.go:141] libmachine: (ha-525790) Calling .GetIP
	I0920 18:47:04.451547  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:04.451921  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:04.451950  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:04.452148  762988 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:47:04.456198  762988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
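
The bash one-liner above is the idempotent /etc/hosts update pattern: strip any existing line for the name, append a fresh "ip<TAB>name" mapping, and copy the temp file back into place. The same transformation expressed as a small Go function (the ensureHostsEntry name is hypothetical, not minikube's):

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "\t<name>" and appends a
// fresh "ip\tname" mapping -- the same effect as the grep -v / echo pipeline
// run over SSH above.
func ensureHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry, re-added below
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.39.2\thost.minikube.internal\n"
	fmt.Print(ensureHostsEntry(hosts, "192.168.39.1", "host.minikube.internal"))
}
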
	I0920 18:47:04.470013  762988 kubeadm.go:883] updating cluster {Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:47:04.470186  762988 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:47:04.470265  762988 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:47:04.502535  762988 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 18:47:04.502609  762988 ssh_runner.go:195] Run: which lz4
	I0920 18:47:04.506581  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0920 18:47:04.506673  762988 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:47:04.510814  762988 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:47:04.510861  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 18:47:05.839638  762988 crio.go:462] duration metric: took 1.33298536s to copy over tarball
	I0920 18:47:05.839723  762988 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:47:07.786766  762988 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.947011448s)
	I0920 18:47:07.786795  762988 crio.go:469] duration metric: took 1.947128446s to extract the tarball
	I0920 18:47:07.786805  762988 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 18:47:07.822913  762988 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:47:07.866552  762988 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:47:07.866583  762988 cache_images.go:84] Images are preloaded, skipping loading
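
Between the two `crictl images --output json` runs above, the preload tarball was copied over and unpacked into /var, turning "assuming images are not preloaded" into "all images are preloaded". The decision itself is just a set-difference over image names; a sketch with a hypothetical needsPreload helper:

package main

import "fmt"

// needsPreload reports whether any expected image is missing from the runtime,
// mirroring the check behind the two log states above. Illustrative only.
func needsPreload(have, want []string) bool {
	present := make(map[string]bool, len(have))
	for _, img := range have {
		present[img] = true
	}
	for _, img := range want {
		if !present[img] {
			return true // at least one required image is absent
		}
	}
	return false
}

func main() {
	have := []string{"registry.k8s.io/pause:3.10"}
	want := []string{"registry.k8s.io/kube-apiserver:v1.31.1", "registry.k8s.io/pause:3.10"}
	fmt.Println("needs preload:", needsPreload(have, want)) // true
}
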
	I0920 18:47:07.866592  762988 kubeadm.go:934] updating node { 192.168.39.149 8443 v1.31.1 crio true true} ...
	I0920 18:47:07.866704  762988 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-525790 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
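
The kubelet drop-in above pins the runtime (Wants=crio.service) and overrides ExecStart with the versioned binary plus node-specific flags; presumably this is the 309-byte 10-kubeadm.conf written a few lines below. A cut-down text/template sketch of how such a unit can be rendered (field names are hypothetical, not minikube's actual structs):

package main

import (
	"os"
	"text/template"
)

// A reduced version of the drop-in shown above, rendered from a template.
const kubeletUnit = `[Unit]
Wants={{.ContainerRuntime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	_ = t.Execute(os.Stdout, map[string]string{
		"ContainerRuntime":  "crio",
		"KubernetesVersion": "v1.31.1",
		"NodeName":          "ha-525790",
		"NodeIP":            "192.168.39.149",
	})
}
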
	I0920 18:47:07.866781  762988 ssh_runner.go:195] Run: crio config
	I0920 18:47:07.918540  762988 cni.go:84] Creating CNI manager for ""
	I0920 18:47:07.918563  762988 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 18:47:07.918573  762988 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:47:07.918597  762988 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.149 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-525790 NodeName:ha-525790 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.149"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.149 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:47:07.918730  762988 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.149
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-525790"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.149
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.149"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
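
This generated kubeadm config is what later lands in /var/tmp/minikube/kubeadm.yaml.new. For the HA setup the important detail is that controlPlaneEndpoint is the stable name control-plane.minikube.internal:8443; further down in the log that name is mapped in /etc/hosts to the kube-vip VIP 192.168.39.254, and the profile apiserver certificate is issued with both the node IP and the VIP as SANs. A tiny sketch grouping the values that have to agree (struct and field names are hypothetical):

package main

import "fmt"

// haEndpoints groups the values that must line up for the HA control plane:
// the shared endpoint name, the VIP it resolves to, and the SANs the
// apiserver certificate needs to cover.
type haEndpoints struct {
	ControlPlaneEndpoint string
	VIP                  string
	NodeIP               string
	APIServerSANs        []string
}

func newHAEndpoints(nodeIP, vip string) haEndpoints {
	return haEndpoints{
		ControlPlaneEndpoint: "control-plane.minikube.internal:8443",
		VIP:                  vip,
		NodeIP:               nodeIP,
		// Cover loopback, the node IP and the VIP so both direct and
		// load-balanced connections verify against the same cert.
		APIServerSANs: []string{"127.0.0.1", "localhost", nodeIP, vip},
	}
}

func main() {
	fmt.Printf("%+v\n", newHAEndpoints("192.168.39.149", "192.168.39.254"))
}
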
	
	I0920 18:47:07.918753  762988 kube-vip.go:115] generating kube-vip config ...
	I0920 18:47:07.918798  762988 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 18:47:07.936288  762988 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 18:47:07.936429  762988 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
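
kube-vip runs as a static pod on each control-plane node; the env block above tells it to claim the VIP 192.168.39.254 on eth0 via ARP, hold a leader-election lease (plndr-cp-lock), load-balance port 8443 across control planes, and use the host's super-admin.conf as its kubeconfig. A sketch of the option-to-env mapping (the option names here are hypothetical; the resulting keys follow the manifest above):

package main

import "fmt"

// kubeVIPEnv mirrors the environment block of the static-pod manifest above.
func kubeVIPEnv(vip, iface string, lbEnabled bool) map[string]string {
	env := map[string]string{
		"vip_arp":       "true", // announce the VIP via gratuitous ARP
		"port":          "8443", // apiserver port fronted by the VIP
		"vip_interface": iface,  // NIC that carries the VIP
		"address":       vip,    // the virtual IP itself
		"cp_enable":     "true", // leader-elected control-plane mode
	}
	if lbEnabled {
		env["lb_enable"] = "true" // round-robin 8443 across control-plane nodes
		env["lb_port"] = "8443"
	}
	return env
}

func main() {
	fmt.Println(kubeVIPEnv("192.168.39.254", "eth0", true))
}
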
	I0920 18:47:07.936497  762988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:47:07.945867  762988 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:47:07.945940  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0920 18:47:07.955191  762988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0920 18:47:07.971064  762988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:47:07.986880  762988 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0920 18:47:08.002662  762988 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0920 18:47:08.019579  762988 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 18:47:08.023552  762988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:47:08.035218  762988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:47:08.170218  762988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:47:08.187527  762988 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790 for IP: 192.168.39.149
	I0920 18:47:08.187547  762988 certs.go:194] generating shared ca certs ...
	I0920 18:47:08.187568  762988 certs.go:226] acquiring lock for ca certs: {Name:mkf559981e1ff96dd3b092845a7637f34a653668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:08.187793  762988 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key
	I0920 18:47:08.187883  762988 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key
	I0920 18:47:08.187899  762988 certs.go:256] generating profile certs ...
	I0920 18:47:08.187973  762988 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.key
	I0920 18:47:08.187993  762988 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.crt with IP's: []
	I0920 18:47:08.272186  762988 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.crt ...
	I0920 18:47:08.272216  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.crt: {Name:mk7bd0f4b5267ef296fffaf22c63ade5f9317aee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:08.272387  762988 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.key ...
	I0920 18:47:08.272398  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.key: {Name:mk8397cc62a5b5fd0095d7257df95debaa0a3c5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:08.272479  762988 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.39888826
	I0920 18:47:08.272493  762988 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.39888826 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.149 192.168.39.254]
	I0920 18:47:08.448019  762988 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.39888826 ...
	I0920 18:47:08.448049  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.39888826: {Name:mk46ff6887950fec6d616a29dc6bce205118977d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:08.448240  762988 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.39888826 ...
	I0920 18:47:08.448262  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.39888826: {Name:mk9b06f9440d087fb58cd5f31657e72732704a6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:08.448360  762988 certs.go:381] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.39888826 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt
	I0920 18:47:08.448487  762988 certs.go:385] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.39888826 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key
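
The apiserver profile certificate generated above is issued for the service IPs, loopback, the node IP and the HA VIP, so clients can verify the server whichever address they dial. A self-contained crypto/x509 sketch showing the same IP-SAN handling (self-signed here for brevity; minikube signs with its minikubeCA):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), // first IP of the service CIDR
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.149"), // node IP
			net.ParseIP("192.168.39.254"), // kube-vip VIP
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}
	fmt.Println("issued cert with IP SANs:", cert.IPAddresses)
}
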
	I0920 18:47:08.448573  762988 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key
	I0920 18:47:08.448592  762988 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt with IP's: []
	I0920 18:47:08.547781  762988 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt ...
	I0920 18:47:08.547811  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt: {Name:mk5f440c35d9494faae93b7f24e431b15c93d038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:08.547991  762988 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key ...
	I0920 18:47:08.548027  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key: {Name:mk1af5a674ecd36547ebff165e719d66a8eaf2a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:08.548154  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 18:47:08.548179  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 18:47:08.548198  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 18:47:08.548217  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 18:47:08.548234  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 18:47:08.548251  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 18:47:08.548270  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 18:47:08.548288  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 18:47:08.548368  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem (1338 bytes)
	W0920 18:47:08.548419  762988 certs.go:480] ignoring /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497_empty.pem, impossibly tiny 0 bytes
	I0920 18:47:08.548433  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:47:08.548468  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:47:08.548498  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:47:08.548526  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem (1679 bytes)
	I0920 18:47:08.548582  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 18:47:08.548616  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem -> /usr/share/ca-certificates/748497.pem
	I0920 18:47:08.548636  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /usr/share/ca-certificates/7484972.pem
	I0920 18:47:08.548655  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:47:08.549274  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:47:08.575606  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:47:08.599030  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:47:08.622271  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:47:08.645192  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 18:47:08.668189  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:47:08.691174  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:47:08.714332  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:47:08.737751  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem --> /usr/share/ca-certificates/748497.pem (1338 bytes)
	I0920 18:47:08.760383  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /usr/share/ca-certificates/7484972.pem (1708 bytes)
	I0920 18:47:08.783502  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:47:08.806863  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:47:08.822981  762988 ssh_runner.go:195] Run: openssl version
	I0920 18:47:08.828850  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/748497.pem && ln -fs /usr/share/ca-certificates/748497.pem /etc/ssl/certs/748497.pem"
	I0920 18:47:08.839624  762988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/748497.pem
	I0920 18:47:08.844261  762988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 18:33 /usr/share/ca-certificates/748497.pem
	I0920 18:47:08.844324  762988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/748497.pem
	I0920 18:47:08.850299  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/748497.pem /etc/ssl/certs/51391683.0"
	I0920 18:47:08.860928  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7484972.pem && ln -fs /usr/share/ca-certificates/7484972.pem /etc/ssl/certs/7484972.pem"
	I0920 18:47:08.871606  762988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7484972.pem
	I0920 18:47:08.876264  762988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 18:33 /usr/share/ca-certificates/7484972.pem
	I0920 18:47:08.876328  762988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7484972.pem
	I0920 18:47:08.882105  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7484972.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:47:08.892622  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:47:08.903139  762988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:47:08.907653  762988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:47:08.907717  762988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:47:08.913362  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
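
The openssl/ln pairs above install each CA certificate under the subject-hash filename OpenSSL uses for lookup: `openssl x509 -hash -noout` prints the hash (e.g. b5213941 for minikubeCA.pem) and the cert is then symlinked as /etc/ssl/certs/<hash>.0. A sketch of deriving that link name by shelling out the same way:

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// subjectHashLink returns the /etc/ssl/certs symlink name OpenSSL expects for
// the given PEM certificate. Error handling is minimal; illustrative only.
func subjectHashLink(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	return filepath.Join("/etc/ssl/certs", hash+".0"), nil
}

func main() {
	link, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		panic(err)
	}
	// The log then runs: ln -fs <cert> <link>
	fmt.Println("ln -fs /usr/share/ca-certificates/minikubeCA.pem", link)
}
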
	I0920 18:47:08.923853  762988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:47:08.927915  762988 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 18:47:08.927964  762988 kubeadm.go:392] StartCluster: {Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:47:08.928033  762988 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:47:08.928074  762988 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:47:08.975658  762988 cri.go:89] found id: ""
	I0920 18:47:08.975731  762988 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:47:08.987853  762988 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:47:09.001997  762988 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:47:09.015239  762988 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:47:09.015263  762988 kubeadm.go:157] found existing configuration files:
	
	I0920 18:47:09.015328  762988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:47:09.024322  762988 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:47:09.024391  762988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:47:09.033789  762988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:47:09.042729  762988 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:47:09.042806  762988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:47:09.052389  762988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:47:09.061397  762988 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:47:09.061452  762988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:47:09.070628  762988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:47:09.079481  762988 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:47:09.079574  762988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:47:09.088812  762988 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:47:09.197025  762988 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 18:47:09.197195  762988 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:47:09.302732  762988 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:47:09.302875  762988 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:47:09.303013  762988 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 18:47:09.313100  762988 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:47:09.315042  762988 out.go:235]   - Generating certificates and keys ...
	I0920 18:47:09.315126  762988 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:47:09.315194  762988 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:47:09.561066  762988 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 18:47:09.701075  762988 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 18:47:09.963251  762988 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 18:47:10.218874  762988 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 18:47:10.374815  762988 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 18:47:10.375019  762988 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-525790 localhost] and IPs [192.168.39.149 127.0.0.1 ::1]
	I0920 18:47:10.536783  762988 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 18:47:10.536945  762988 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-525790 localhost] and IPs [192.168.39.149 127.0.0.1 ::1]
	I0920 18:47:10.653048  762988 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 18:47:10.817540  762988 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 18:47:11.052072  762988 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 18:47:11.052166  762988 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:47:11.275604  762988 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:47:11.340320  762988 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 18:47:11.606513  762988 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:47:11.722778  762988 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:47:11.939356  762988 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:47:11.939850  762988 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:47:11.942972  762988 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:47:11.945229  762988 out.go:235]   - Booting up control plane ...
	I0920 18:47:11.945356  762988 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:47:11.945485  762988 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:47:11.945574  762988 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:47:11.961277  762988 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:47:11.967235  762988 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:47:11.967294  762988 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:47:12.103452  762988 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 18:47:12.103652  762988 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 18:47:12.605055  762988 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.510324ms
	I0920 18:47:12.605178  762988 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 18:47:18.584157  762988 kubeadm.go:310] [api-check] The API server is healthy after 5.978671976s
	I0920 18:47:18.596695  762988 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 18:47:19.113972  762988 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 18:47:19.144976  762988 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 18:47:19.145190  762988 kubeadm.go:310] [mark-control-plane] Marking the node ha-525790 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 18:47:19.157610  762988 kubeadm.go:310] [bootstrap-token] Using token: qd32pn.8pqkvbtlqp80l6sb
	I0920 18:47:19.159113  762988 out.go:235]   - Configuring RBAC rules ...
	I0920 18:47:19.159238  762988 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 18:47:19.164190  762988 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 18:47:19.177203  762988 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 18:47:19.185189  762988 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 18:47:19.189876  762988 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 18:47:19.193529  762988 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 18:47:19.311685  762988 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 18:47:19.754352  762988 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 18:47:20.310973  762988 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 18:47:20.311943  762988 kubeadm.go:310] 
	I0920 18:47:20.312030  762988 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 18:47:20.312039  762988 kubeadm.go:310] 
	I0920 18:47:20.312140  762988 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 18:47:20.312149  762988 kubeadm.go:310] 
	I0920 18:47:20.312178  762988 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 18:47:20.312290  762988 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 18:47:20.312369  762988 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 18:47:20.312380  762988 kubeadm.go:310] 
	I0920 18:47:20.312430  762988 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 18:47:20.312442  762988 kubeadm.go:310] 
	I0920 18:47:20.312481  762988 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 18:47:20.312487  762988 kubeadm.go:310] 
	I0920 18:47:20.312536  762988 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 18:47:20.312615  762988 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 18:47:20.312715  762988 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 18:47:20.312735  762988 kubeadm.go:310] 
	I0920 18:47:20.312856  762988 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 18:47:20.312961  762988 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 18:47:20.312973  762988 kubeadm.go:310] 
	I0920 18:47:20.313079  762988 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qd32pn.8pqkvbtlqp80l6sb \
	I0920 18:47:20.313228  762988 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:947ef21afc8104efa9fe7e5dbe397ab7540e2665a521761d784eb9c9d11b061d \
	I0920 18:47:20.313262  762988 kubeadm.go:310] 	--control-plane 
	I0920 18:47:20.313271  762988 kubeadm.go:310] 
	I0920 18:47:20.313383  762988 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 18:47:20.313397  762988 kubeadm.go:310] 
	I0920 18:47:20.313513  762988 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qd32pn.8pqkvbtlqp80l6sb \
	I0920 18:47:20.313639  762988 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:947ef21afc8104efa9fe7e5dbe397ab7540e2665a521761d784eb9c9d11b061d 
	I0920 18:47:20.314670  762988 kubeadm.go:310] W0920 18:47:09.152542     827 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:47:20.315023  762988 kubeadm.go:310] W0920 18:47:09.153465     827 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:47:20.315172  762988 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:47:20.315210  762988 cni.go:84] Creating CNI manager for ""
	I0920 18:47:20.315225  762988 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 18:47:20.317188  762988 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0920 18:47:20.318757  762988 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0920 18:47:20.324392  762988 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0920 18:47:20.324411  762988 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0920 18:47:20.347801  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0920 18:47:20.735995  762988 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:47:20.736093  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:47:20.736105  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-525790 minikube.k8s.io/updated_at=2024_09_20T18_47_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a minikube.k8s.io/name=ha-525790 minikube.k8s.io/primary=true
	I0920 18:47:20.761909  762988 ops.go:34] apiserver oom_adj: -16
	I0920 18:47:20.876678  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:47:21.377092  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:47:21.876896  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:47:22.377010  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:47:22.877069  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:47:23.377474  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:47:23.877640  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:47:24.377768  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:47:24.504008  762988 kubeadm.go:1113] duration metric: took 3.76800228s to wait for elevateKubeSystemPrivileges
	I0920 18:47:24.504045  762988 kubeadm.go:394] duration metric: took 15.576084363s to StartCluster
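The run of identical `kubectl get sa default` commands above is a plain poll: roughly every 500ms minikube re-checks whether the default service account exists, and the total wait is reported as the elevateKubeSystemPrivileges duration metric. A self-contained sketch of that polling shape (the 500ms interval matches the log timestamps; the 2-minute timeout and the bare `kubectl` on PATH are assumptions, since the log invokes the full binary path via sudo):

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or ctx expires.
func waitForDefaultSA(ctx context.Context, kubeconfig string) (time.Duration, error) {
	start := time.Now()
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		cmd := exec.CommandContext(ctx, "kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			return time.Since(start), nil
		}
		select {
		case <-ctx.Done():
			return time.Since(start), ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute) // assumed timeout
	defer cancel()
	d, err := waitForDefaultSA(ctx, "/var/lib/minikube/kubeconfig")
	fmt.Println("waited", d, "err:", err)
}
```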
	I0920 18:47:24.504070  762988 settings.go:142] acquiring lock: {Name:mk0bd1e421bf437575c076c52c1ff2f74497a1ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:24.504282  762988 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:47:24.505108  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/kubeconfig: {Name:mk275c54cf52b0ccdc22fcaa39c7b9c31092c648 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:24.505342  762988 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:47:24.505366  762988 start.go:241] waiting for startup goroutines ...
	I0920 18:47:24.505366  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 18:47:24.505382  762988 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 18:47:24.505468  762988 addons.go:69] Setting storage-provisioner=true in profile "ha-525790"
	I0920 18:47:24.505483  762988 addons.go:69] Setting default-storageclass=true in profile "ha-525790"
	I0920 18:47:24.505492  762988 addons.go:234] Setting addon storage-provisioner=true in "ha-525790"
	I0920 18:47:24.505509  762988 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-525790"
	I0920 18:47:24.505524  762988 host.go:66] Checking if "ha-525790" exists ...
	I0920 18:47:24.505571  762988 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:47:24.505974  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:47:24.506023  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:47:24.506141  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:47:24.506249  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:47:24.522502  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41623
	I0920 18:47:24.522534  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38073
	I0920 18:47:24.522991  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:47:24.523040  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:47:24.523523  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:47:24.523546  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:47:24.523666  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:47:24.523684  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:47:24.523961  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:47:24.524077  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:47:24.524239  762988 main.go:141] libmachine: (ha-525790) Calling .GetState
	I0920 18:47:24.524629  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:47:24.524696  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:47:24.526413  762988 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:47:24.526810  762988 kapi.go:59] client config for ha-525790: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.crt", KeyFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.key", CAFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
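The `rest.Config` dumped above is built from the kubeconfig minikube just wrote: it targets the HA virtual IP on port 8443 and authenticates with the profile's client certificate. For reference, the generic client-go pattern for loading such a config from a kubeconfig file and making a request looks roughly like the sketch below (paths are placeholders; this is the standard library pattern, not minikube's exact loader):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; the log uses the Jenkins workspace kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List storage classes, mirroring the GET the addon check performs later.
	scs, err := client.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, sc := range scs.Items {
		fmt.Println(sc.Name)
	}
}
```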
	I0920 18:47:24.527471  762988 cert_rotation.go:140] Starting client certificate rotation controller
	I0920 18:47:24.527819  762988 addons.go:234] Setting addon default-storageclass=true in "ha-525790"
	I0920 18:47:24.527875  762988 host.go:66] Checking if "ha-525790" exists ...
	I0920 18:47:24.528265  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:47:24.528313  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:47:24.542871  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35201
	I0920 18:47:24.543236  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33445
	I0920 18:47:24.543494  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:47:24.543587  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:47:24.544071  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:47:24.544093  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:47:24.544229  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:47:24.544255  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:47:24.544432  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:47:24.544641  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:47:24.544640  762988 main.go:141] libmachine: (ha-525790) Calling .GetState
	I0920 18:47:24.545205  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:47:24.545253  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:47:24.546391  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:24.548710  762988 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:47:24.550144  762988 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:47:24.550165  762988 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:47:24.550186  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:24.553367  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:24.553828  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:24.553854  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:24.553998  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:24.554216  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:24.554440  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:24.554622  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:47:24.561549  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37321
	I0920 18:47:24.561966  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:47:24.562494  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:47:24.562519  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:47:24.562876  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:47:24.563072  762988 main.go:141] libmachine: (ha-525790) Calling .GetState
	I0920 18:47:24.564587  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:24.564814  762988 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:47:24.564831  762988 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:47:24.564849  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:24.567687  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:24.568171  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:24.568193  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:24.568319  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:24.568510  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:24.568703  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:24.568857  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:47:24.656392  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 18:47:24.815217  762988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:47:24.828379  762988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:47:25.253619  762988 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
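The long sed pipeline above rewrites the coredns ConfigMap so `host.minikube.internal` resolves to the host gateway (192.168.39.1 here): a `hosts` stanza is inserted immediately before the `forward . /etc/resolv.conf` plugin, and a `log` directive is added next to `errors`. A rough Go sketch of just the hosts-stanza edit, operating on the Corefile text only (the real flow does this with sed over SSH and a `kubectl replace`, as shown above):

```go
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts stanza before the forward plugin so that
// host.minikube.internal resolves to the given IP inside the cluster.
func injectHostRecord(corefile, hostIP string) string {
	stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.Split(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(stanza)
		}
		out.WriteString(line + "\n")
	}
	return out.String()
}

func main() {
	corefile := `.:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
}`
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}
```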
	I0920 18:47:25.464741  762988 main.go:141] libmachine: Making call to close driver server
	I0920 18:47:25.464767  762988 main.go:141] libmachine: (ha-525790) Calling .Close
	I0920 18:47:25.464846  762988 main.go:141] libmachine: Making call to close driver server
	I0920 18:47:25.464869  762988 main.go:141] libmachine: (ha-525790) Calling .Close
	I0920 18:47:25.465054  762988 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:47:25.465071  762988 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:47:25.465081  762988 main.go:141] libmachine: Making call to close driver server
	I0920 18:47:25.465089  762988 main.go:141] libmachine: (ha-525790) Calling .Close
	I0920 18:47:25.465214  762988 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:47:25.465241  762988 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:47:25.465251  762988 main.go:141] libmachine: Making call to close driver server
	I0920 18:47:25.465258  762988 main.go:141] libmachine: (ha-525790) Calling .Close
	I0920 18:47:25.465320  762988 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:47:25.465336  762988 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:47:25.465344  762988 main.go:141] libmachine: (ha-525790) DBG | Closing plugin on server side
	I0920 18:47:25.465497  762988 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:47:25.465514  762988 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:47:25.465592  762988 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0920 18:47:25.465620  762988 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0920 18:47:25.465728  762988 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0920 18:47:25.465739  762988 round_trippers.go:469] Request Headers:
	I0920 18:47:25.465759  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:47:25.465768  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:47:25.475780  762988 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0920 18:47:25.476328  762988 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0920 18:47:25.476346  762988 round_trippers.go:469] Request Headers:
	I0920 18:47:25.476353  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:47:25.476356  762988 round_trippers.go:473]     Content-Type: application/json
	I0920 18:47:25.476359  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:47:25.478464  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:47:25.478670  762988 main.go:141] libmachine: Making call to close driver server
	I0920 18:47:25.478686  762988 main.go:141] libmachine: (ha-525790) Calling .Close
	I0920 18:47:25.479015  762988 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:47:25.479056  762988 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:47:25.479019  762988 main.go:141] libmachine: (ha-525790) DBG | Closing plugin on server side
	I0920 18:47:25.480685  762988 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0920 18:47:25.481832  762988 addons.go:510] duration metric: took 976.454814ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0920 18:47:25.481877  762988 start.go:246] waiting for cluster config update ...
	I0920 18:47:25.481891  762988 start.go:255] writing updated cluster config ...
	I0920 18:47:25.483450  762988 out.go:201] 
	I0920 18:47:25.484717  762988 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:47:25.484795  762988 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 18:47:25.486329  762988 out.go:177] * Starting "ha-525790-m02" control-plane node in "ha-525790" cluster
	I0920 18:47:25.487492  762988 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:47:25.487516  762988 cache.go:56] Caching tarball of preloaded images
	I0920 18:47:25.487633  762988 preload.go:172] Found /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:47:25.487647  762988 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:47:25.487721  762988 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 18:47:25.487913  762988 start.go:360] acquireMachinesLock for ha-525790-m02: {Name:mke27b943eaf3105a3a7818ba8cbb5bd07aa92e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:47:25.487963  762988 start.go:364] duration metric: took 29.413µs to acquireMachinesLock for "ha-525790-m02"
	I0920 18:47:25.487982  762988 start.go:93] Provisioning new machine with config: &{Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:47:25.488070  762988 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0920 18:47:25.489602  762988 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 18:47:25.489710  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:47:25.489745  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:47:25.504741  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I0920 18:47:25.505176  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:47:25.505735  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:47:25.505756  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:47:25.506114  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:47:25.506304  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetMachineName
	I0920 18:47:25.506440  762988 main.go:141] libmachine: (ha-525790-m02) Calling .DriverName
	I0920 18:47:25.506586  762988 start.go:159] libmachine.API.Create for "ha-525790" (driver="kvm2")
	I0920 18:47:25.506620  762988 client.go:168] LocalClient.Create starting
	I0920 18:47:25.506658  762988 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem
	I0920 18:47:25.506697  762988 main.go:141] libmachine: Decoding PEM data...
	I0920 18:47:25.506717  762988 main.go:141] libmachine: Parsing certificate...
	I0920 18:47:25.506786  762988 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem
	I0920 18:47:25.506825  762988 main.go:141] libmachine: Decoding PEM data...
	I0920 18:47:25.506864  762988 main.go:141] libmachine: Parsing certificate...
	I0920 18:47:25.506891  762988 main.go:141] libmachine: Running pre-create checks...
	I0920 18:47:25.506903  762988 main.go:141] libmachine: (ha-525790-m02) Calling .PreCreateCheck
	I0920 18:47:25.507083  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetConfigRaw
	I0920 18:47:25.507514  762988 main.go:141] libmachine: Creating machine...
	I0920 18:47:25.507530  762988 main.go:141] libmachine: (ha-525790-m02) Calling .Create
	I0920 18:47:25.507681  762988 main.go:141] libmachine: (ha-525790-m02) Creating KVM machine...
	I0920 18:47:25.508920  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found existing default KVM network
	I0920 18:47:25.509048  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found existing private KVM network mk-ha-525790
	I0920 18:47:25.509185  762988 main.go:141] libmachine: (ha-525790-m02) Setting up store path in /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02 ...
	I0920 18:47:25.509201  762988 main.go:141] libmachine: (ha-525790-m02) Building disk image from file:///home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 18:47:25.509310  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:25.509191  763373 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:47:25.509384  762988 main.go:141] libmachine: (ha-525790-m02) Downloading /home/jenkins/minikube-integration/19678-739831/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 18:47:25.810758  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:25.810588  763373 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/id_rsa...
	I0920 18:47:26.052474  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:26.052313  763373 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/ha-525790-m02.rawdisk...
	I0920 18:47:26.052509  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Writing magic tar header
	I0920 18:47:26.052523  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Writing SSH key tar header
	I0920 18:47:26.052535  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:26.052440  763373 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02 ...
	I0920 18:47:26.052629  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02
	I0920 18:47:26.052676  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube/machines
	I0920 18:47:26.052691  762988 main.go:141] libmachine: (ha-525790-m02) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02 (perms=drwx------)
	I0920 18:47:26.052705  762988 main.go:141] libmachine: (ha-525790-m02) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube/machines (perms=drwxr-xr-x)
	I0920 18:47:26.052718  762988 main.go:141] libmachine: (ha-525790-m02) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube (perms=drwxr-xr-x)
	I0920 18:47:26.052738  762988 main.go:141] libmachine: (ha-525790-m02) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831 (perms=drwxrwxr-x)
	I0920 18:47:26.052758  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:47:26.052768  762988 main.go:141] libmachine: (ha-525790-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 18:47:26.052788  762988 main.go:141] libmachine: (ha-525790-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 18:47:26.052797  762988 main.go:141] libmachine: (ha-525790-m02) Creating domain...
	I0920 18:47:26.052815  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831
	I0920 18:47:26.052826  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 18:47:26.052837  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Checking permissions on dir: /home/jenkins
	I0920 18:47:26.052849  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Checking permissions on dir: /home
	I0920 18:47:26.052861  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Skipping /home - not owner
	I0920 18:47:26.053670  762988 main.go:141] libmachine: (ha-525790-m02) define libvirt domain using xml: 
	I0920 18:47:26.053692  762988 main.go:141] libmachine: (ha-525790-m02) <domain type='kvm'>
	I0920 18:47:26.053711  762988 main.go:141] libmachine: (ha-525790-m02)   <name>ha-525790-m02</name>
	I0920 18:47:26.053719  762988 main.go:141] libmachine: (ha-525790-m02)   <memory unit='MiB'>2200</memory>
	I0920 18:47:26.053731  762988 main.go:141] libmachine: (ha-525790-m02)   <vcpu>2</vcpu>
	I0920 18:47:26.053741  762988 main.go:141] libmachine: (ha-525790-m02)   <features>
	I0920 18:47:26.053752  762988 main.go:141] libmachine: (ha-525790-m02)     <acpi/>
	I0920 18:47:26.053761  762988 main.go:141] libmachine: (ha-525790-m02)     <apic/>
	I0920 18:47:26.053790  762988 main.go:141] libmachine: (ha-525790-m02)     <pae/>
	I0920 18:47:26.053810  762988 main.go:141] libmachine: (ha-525790-m02)     
	I0920 18:47:26.053820  762988 main.go:141] libmachine: (ha-525790-m02)   </features>
	I0920 18:47:26.053828  762988 main.go:141] libmachine: (ha-525790-m02)   <cpu mode='host-passthrough'>
	I0920 18:47:26.053841  762988 main.go:141] libmachine: (ha-525790-m02)   
	I0920 18:47:26.053848  762988 main.go:141] libmachine: (ha-525790-m02)   </cpu>
	I0920 18:47:26.053859  762988 main.go:141] libmachine: (ha-525790-m02)   <os>
	I0920 18:47:26.053883  762988 main.go:141] libmachine: (ha-525790-m02)     <type>hvm</type>
	I0920 18:47:26.053908  762988 main.go:141] libmachine: (ha-525790-m02)     <boot dev='cdrom'/>
	I0920 18:47:26.053933  762988 main.go:141] libmachine: (ha-525790-m02)     <boot dev='hd'/>
	I0920 18:47:26.053946  762988 main.go:141] libmachine: (ha-525790-m02)     <bootmenu enable='no'/>
	I0920 18:47:26.053958  762988 main.go:141] libmachine: (ha-525790-m02)   </os>
	I0920 18:47:26.053975  762988 main.go:141] libmachine: (ha-525790-m02)   <devices>
	I0920 18:47:26.053988  762988 main.go:141] libmachine: (ha-525790-m02)     <disk type='file' device='cdrom'>
	I0920 18:47:26.053999  762988 main.go:141] libmachine: (ha-525790-m02)       <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/boot2docker.iso'/>
	I0920 18:47:26.054008  762988 main.go:141] libmachine: (ha-525790-m02)       <target dev='hdc' bus='scsi'/>
	I0920 18:47:26.054017  762988 main.go:141] libmachine: (ha-525790-m02)       <readonly/>
	I0920 18:47:26.054026  762988 main.go:141] libmachine: (ha-525790-m02)     </disk>
	I0920 18:47:26.054036  762988 main.go:141] libmachine: (ha-525790-m02)     <disk type='file' device='disk'>
	I0920 18:47:26.054048  762988 main.go:141] libmachine: (ha-525790-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 18:47:26.054067  762988 main.go:141] libmachine: (ha-525790-m02)       <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/ha-525790-m02.rawdisk'/>
	I0920 18:47:26.054080  762988 main.go:141] libmachine: (ha-525790-m02)       <target dev='hda' bus='virtio'/>
	I0920 18:47:26.054092  762988 main.go:141] libmachine: (ha-525790-m02)     </disk>
	I0920 18:47:26.054102  762988 main.go:141] libmachine: (ha-525790-m02)     <interface type='network'>
	I0920 18:47:26.054113  762988 main.go:141] libmachine: (ha-525790-m02)       <source network='mk-ha-525790'/>
	I0920 18:47:26.054121  762988 main.go:141] libmachine: (ha-525790-m02)       <model type='virtio'/>
	I0920 18:47:26.054138  762988 main.go:141] libmachine: (ha-525790-m02)     </interface>
	I0920 18:47:26.054148  762988 main.go:141] libmachine: (ha-525790-m02)     <interface type='network'>
	I0920 18:47:26.054159  762988 main.go:141] libmachine: (ha-525790-m02)       <source network='default'/>
	I0920 18:47:26.054170  762988 main.go:141] libmachine: (ha-525790-m02)       <model type='virtio'/>
	I0920 18:47:26.054182  762988 main.go:141] libmachine: (ha-525790-m02)     </interface>
	I0920 18:47:26.054192  762988 main.go:141] libmachine: (ha-525790-m02)     <serial type='pty'>
	I0920 18:47:26.054202  762988 main.go:141] libmachine: (ha-525790-m02)       <target port='0'/>
	I0920 18:47:26.054210  762988 main.go:141] libmachine: (ha-525790-m02)     </serial>
	I0920 18:47:26.054226  762988 main.go:141] libmachine: (ha-525790-m02)     <console type='pty'>
	I0920 18:47:26.054239  762988 main.go:141] libmachine: (ha-525790-m02)       <target type='serial' port='0'/>
	I0920 18:47:26.054250  762988 main.go:141] libmachine: (ha-525790-m02)     </console>
	I0920 18:47:26.054260  762988 main.go:141] libmachine: (ha-525790-m02)     <rng model='virtio'>
	I0920 18:47:26.054269  762988 main.go:141] libmachine: (ha-525790-m02)       <backend model='random'>/dev/random</backend>
	I0920 18:47:26.054275  762988 main.go:141] libmachine: (ha-525790-m02)     </rng>
	I0920 18:47:26.054282  762988 main.go:141] libmachine: (ha-525790-m02)     
	I0920 18:47:26.054290  762988 main.go:141] libmachine: (ha-525790-m02)     
	I0920 18:47:26.054302  762988 main.go:141] libmachine: (ha-525790-m02)   </devices>
	I0920 18:47:26.054314  762988 main.go:141] libmachine: (ha-525790-m02) </domain>
	I0920 18:47:26.054327  762988 main.go:141] libmachine: (ha-525790-m02) 
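The domain definition printed above is the libvirt XML the kvm2 driver generates for the new node: 2 vCPUs, 2200 MiB of memory, the boot ISO on a SCSI cdrom, the raw disk on virtio, and two virtio NICs (the private mk-ha-525790 network plus libvirt's default network). A minimal sketch of rendering a similar definition with Go's text/template, assuming a small config struct carries the per-machine values (the field names and paths here are illustrative, not the driver's own types):

```go
package main

import (
	"os"
	"text/template"
)

// domainConfig holds the per-machine values substituted into the XML.
// Field names are illustrative, not the kvm2 driver's actual types.
type domainConfig struct {
	Name    string
	MemMiB  int
	VCPUs   int
	ISO     string
	Disk    string
	Network string
}

const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemMiB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISO}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.Disk}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	tmpl := template.Must(template.New("domain").Parse(domainXML))
	cfg := domainConfig{
		Name:    "ha-525790-m02",
		MemMiB:  2200,
		VCPUs:   2,
		ISO:     "/path/to/boot2docker.iso",       // placeholder
		Disk:    "/path/to/ha-525790-m02.rawdisk", // placeholder
		Network: "mk-ha-525790",
	}
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
```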
	I0920 18:47:26.060630  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:c9:44:90 in network default
	I0920 18:47:26.061118  762988 main.go:141] libmachine: (ha-525790-m02) Ensuring networks are active...
	I0920 18:47:26.061136  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:26.061831  762988 main.go:141] libmachine: (ha-525790-m02) Ensuring network default is active
	I0920 18:47:26.062169  762988 main.go:141] libmachine: (ha-525790-m02) Ensuring network mk-ha-525790 is active
	I0920 18:47:26.062475  762988 main.go:141] libmachine: (ha-525790-m02) Getting domain xml...
	I0920 18:47:26.063135  762988 main.go:141] libmachine: (ha-525790-m02) Creating domain...
	I0920 18:47:27.281978  762988 main.go:141] libmachine: (ha-525790-m02) Waiting to get IP...
	I0920 18:47:27.282784  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:27.283239  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:27.283266  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:27.283218  763373 retry.go:31] will retry after 308.177361ms: waiting for machine to come up
	I0920 18:47:27.592590  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:27.593066  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:27.593096  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:27.593029  763373 retry.go:31] will retry after 320.236434ms: waiting for machine to come up
	I0920 18:47:27.914511  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:27.914888  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:27.914914  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:27.914871  763373 retry.go:31] will retry after 467.681075ms: waiting for machine to come up
	I0920 18:47:28.384709  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:28.385145  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:28.385176  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:28.385093  763373 retry.go:31] will retry after 475.809922ms: waiting for machine to come up
	I0920 18:47:28.862677  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:28.863104  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:28.863166  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:28.863088  763373 retry.go:31] will retry after 752.437443ms: waiting for machine to come up
	I0920 18:47:29.616869  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:29.617208  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:29.617236  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:29.617153  763373 retry.go:31] will retry after 885.836184ms: waiting for machine to come up
	I0920 18:47:30.505116  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:30.505517  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:30.505574  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:30.505468  763373 retry.go:31] will retry after 963.771364ms: waiting for machine to come up
	I0920 18:47:31.470533  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:31.470960  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:31.470987  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:31.470922  763373 retry.go:31] will retry after 1.119790188s: waiting for machine to come up
	I0920 18:47:32.592108  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:32.592570  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:32.592610  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:32.592526  763373 retry.go:31] will retry after 1.532725085s: waiting for machine to come up
	I0920 18:47:34.127220  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:34.127626  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:34.127659  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:34.127555  763373 retry.go:31] will retry after 1.862816679s: waiting for machine to come up
	I0920 18:47:35.991806  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:35.992125  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:35.992154  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:35.992071  763373 retry.go:31] will retry after 2.15065243s: waiting for machine to come up
	I0920 18:47:38.145444  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:38.145875  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:38.145907  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:38.145806  763373 retry.go:31] will retry after 3.304630599s: waiting for machine to come up
	I0920 18:47:41.451734  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:41.452111  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:41.452140  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:41.452065  763373 retry.go:31] will retry after 3.579286099s: waiting for machine to come up
	I0920 18:47:45.035810  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:45.036306  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:45.036331  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:45.036255  763373 retry.go:31] will retry after 4.166411475s: waiting for machine to come up
	I0920 18:47:49.204465  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.205113  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has current primary IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.205136  762988 main.go:141] libmachine: (ha-525790-m02) Found IP for machine: 192.168.39.246
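The "unable to find current IP address ... will retry after Xms" sequence above is the driver polling libvirt for a DHCP lease: each failed lookup schedules another attempt after a progressively longer, jittered delay until the domain reports an IP. A small, self-contained sketch of that retry-with-growing-backoff shape (the starting delay, growth factor, jitter, and cap here are assumptions; the real values live in minikube's retry package):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls lookup until it succeeds or maxWait elapses,
// sleeping a jittered, growing delay between attempts.
func retryWithBackoff(lookup func() (string, error), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 300 * time.Millisecond // assumed starting delay
	for {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out waiting for IP: %w", err)
		}
		// Grow the delay and add jitter so concurrent creations don't sync up.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
		if delay > 5*time.Second {
			delay = 5 * time.Second
		}
	}
}

func main() {
	attempts := 0
	ip, err := retryWithBackoff(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.39.246", nil
	}, 2*time.Minute)
	fmt.Println(ip, err)
}
```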
	I0920 18:47:49.205146  762988 main.go:141] libmachine: (ha-525790-m02) Reserving static IP address...
	I0920 18:47:49.205644  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find host DHCP lease matching {name: "ha-525790-m02", mac: "52:54:00:da:aa:a2", ip: "192.168.39.246"} in network mk-ha-525790
	I0920 18:47:49.279479  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Getting to WaitForSSH function...
	I0920 18:47:49.279570  762988 main.go:141] libmachine: (ha-525790-m02) Reserved static IP address: 192.168.39.246
	I0920 18:47:49.279586  762988 main.go:141] libmachine: (ha-525790-m02) Waiting for SSH to be available...
	I0920 18:47:49.282091  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.282697  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:minikube Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:49.282724  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.282939  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Using SSH client type: external
	I0920 18:47:49.282962  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/id_rsa (-rw-------)
	I0920 18:47:49.283009  762988 main.go:141] libmachine: (ha-525790-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:47:49.283028  762988 main.go:141] libmachine: (ha-525790-m02) DBG | About to run SSH command:
	I0920 18:47:49.283043  762988 main.go:141] libmachine: (ha-525790-m02) DBG | exit 0
	I0920 18:47:49.406686  762988 main.go:141] libmachine: (ha-525790-m02) DBG | SSH cmd err, output: <nil>: 
	I0920 18:47:49.406894  762988 main.go:141] libmachine: (ha-525790-m02) KVM machine creation complete!
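Machine creation finishes with an external SSH probe: minikube shells out to /usr/bin/ssh with the options logged above and runs `exit 0`, treating a zero exit status as proof the guest is reachable. A minimal Go sketch of that probe using a subset of those options (the key path is a placeholder; the real invocation also sets ControlMaster, ControlPath, LogLevel, PasswordAuthentication, and ServerAliveInterval as shown in the log):

```go
package main

import (
	"fmt"
	"os/exec"
)

// sshExitZero probes a machine the way the external SSH client in the log does:
// run `exit 0` over ssh and treat a zero exit status as "SSH is available".
func sshExitZero(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	err := sshExitZero("192.168.39.246", "/path/to/id_rsa") // placeholder key path
	fmt.Println("ssh available:", err == nil)
}
```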
	I0920 18:47:49.407253  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetConfigRaw
	I0920 18:47:49.407921  762988 main.go:141] libmachine: (ha-525790-m02) Calling .DriverName
	I0920 18:47:49.408101  762988 main.go:141] libmachine: (ha-525790-m02) Calling .DriverName
	I0920 18:47:49.408280  762988 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 18:47:49.408299  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetState
	I0920 18:47:49.409531  762988 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 18:47:49.409549  762988 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 18:47:49.409556  762988 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 18:47:49.409565  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:47:49.411929  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.412327  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:49.412357  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.412422  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:47:49.412599  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:49.412798  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:49.412930  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:47:49.413134  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:49.413339  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 18:47:49.413349  762988 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 18:47:49.514173  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:47:49.514209  762988 main.go:141] libmachine: Detecting the provisioner...
	I0920 18:47:49.514222  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:47:49.516963  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.517420  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:49.517450  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.517591  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:47:49.517799  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:49.517980  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:49.518113  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:47:49.518250  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:49.518433  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 18:47:49.518443  762988 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 18:47:49.619473  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 18:47:49.619576  762988 main.go:141] libmachine: found compatible host: buildroot
	I0920 18:47:49.619587  762988 main.go:141] libmachine: Provisioning with buildroot...
	I0920 18:47:49.619599  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetMachineName
	I0920 18:47:49.619832  762988 buildroot.go:166] provisioning hostname "ha-525790-m02"
	I0920 18:47:49.619860  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetMachineName
	I0920 18:47:49.620048  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:47:49.622596  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.622960  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:49.622986  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.623162  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:47:49.623347  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:49.623512  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:49.623614  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:47:49.623826  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:49.624053  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 18:47:49.624072  762988 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-525790-m02 && echo "ha-525790-m02" | sudo tee /etc/hostname
	I0920 18:47:49.741686  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-525790-m02
	
	I0920 18:47:49.741719  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:47:49.744162  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.744537  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:49.744566  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.744764  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:47:49.744977  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:49.745123  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:49.745246  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:47:49.745415  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:49.745636  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 18:47:49.745654  762988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-525790-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-525790-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-525790-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:47:49.861819  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:47:49.861869  762988 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19678-739831/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-739831/.minikube}
	I0920 18:47:49.861890  762988 buildroot.go:174] setting up certificates
	I0920 18:47:49.861903  762988 provision.go:84] configureAuth start
	I0920 18:47:49.861915  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetMachineName
	I0920 18:47:49.862237  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetIP
	I0920 18:47:49.864787  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.865160  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:49.865188  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.865324  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:47:49.867360  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.867673  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:49.867699  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.867911  762988 provision.go:143] copyHostCerts
	I0920 18:47:49.867938  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 18:47:49.867981  762988 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem, removing ...
	I0920 18:47:49.867990  762988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 18:47:49.868053  762988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem (1078 bytes)
	I0920 18:47:49.868121  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 18:47:49.868140  762988 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem, removing ...
	I0920 18:47:49.868144  762988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 18:47:49.868168  762988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem (1123 bytes)
	I0920 18:47:49.868256  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 18:47:49.868279  762988 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem, removing ...
	I0920 18:47:49.868285  762988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 18:47:49.868309  762988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem (1679 bytes)
	I0920 18:47:49.868354  762988 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem org=jenkins.ha-525790-m02 san=[127.0.0.1 192.168.39.246 ha-525790-m02 localhost minikube]
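The provision step above generates a per-machine server certificate signed by minikube's local CA, with a SAN list covering the loopback address, the node IP, the machine hostname and the generic "localhost"/"minikube" names. A minimal Go sketch of the same idea using the standard crypto/x509 package; the file names, RSA key type and validity period are assumptions for illustration, not minikube's actual implementation (error handling elided):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the signing CA (assumed PEM file names; minikube keeps its CA under .minikube/certs).
	caPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key

	// Fresh key pair for the server certificate.
	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject: pkix.Name{
			CommonName:   "minikube",
			Organization: []string{"jenkins.ha-525790-m02"},
		},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().AddDate(3, 0, 0),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the san=[...] list in the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.246")},
		DNSNames:    []string{"ha-525790-m02", "localhost", "minikube"},
	}

	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}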
	I0920 18:47:50.026326  762988 provision.go:177] copyRemoteCerts
	I0920 18:47:50.026387  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:47:50.026413  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:47:50.029067  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.029469  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:50.029558  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.029689  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:47:50.029875  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:50.030065  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:47:50.030209  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/id_rsa Username:docker}
	I0920 18:47:50.113429  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 18:47:50.113512  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:47:50.138381  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 18:47:50.138457  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 18:47:50.162199  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 18:47:50.162285  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:47:50.185945  762988 provision.go:87] duration metric: took 324.027275ms to configureAuth
	I0920 18:47:50.185972  762988 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:47:50.186148  762988 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:47:50.186225  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:47:50.190079  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.190492  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:50.190513  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.190710  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:47:50.190964  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:50.191145  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:50.191294  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:47:50.191424  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:50.191588  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 18:47:50.191602  762988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:47:50.416583  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:47:50.416624  762988 main.go:141] libmachine: Checking connection to Docker...
	I0920 18:47:50.416631  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetURL
	I0920 18:47:50.417912  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Using libvirt version 6000000
	I0920 18:47:50.420017  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.420424  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:50.420454  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.420641  762988 main.go:141] libmachine: Docker is up and running!
	I0920 18:47:50.420664  762988 main.go:141] libmachine: Reticulating splines...
	I0920 18:47:50.420672  762988 client.go:171] duration metric: took 24.914041264s to LocalClient.Create
	I0920 18:47:50.420699  762988 start.go:167] duration metric: took 24.914113541s to libmachine.API.Create "ha-525790"
	I0920 18:47:50.420712  762988 start.go:293] postStartSetup for "ha-525790-m02" (driver="kvm2")
	I0920 18:47:50.420726  762988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:47:50.420744  762988 main.go:141] libmachine: (ha-525790-m02) Calling .DriverName
	I0920 18:47:50.420995  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:47:50.421029  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:47:50.423161  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.423420  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:50.423447  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.423594  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:47:50.423797  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:50.423953  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:47:50.424081  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/id_rsa Username:docker}
	I0920 18:47:50.505401  762988 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:47:50.510220  762988 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:47:50.510246  762988 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/addons for local assets ...
	I0920 18:47:50.510332  762988 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/files for local assets ...
	I0920 18:47:50.510417  762988 filesync.go:149] local asset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> 7484972.pem in /etc/ssl/certs
	I0920 18:47:50.510429  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /etc/ssl/certs/7484972.pem
	I0920 18:47:50.510527  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:47:50.520201  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 18:47:50.544692  762988 start.go:296] duration metric: took 123.962986ms for postStartSetup
	I0920 18:47:50.544747  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetConfigRaw
	I0920 18:47:50.545353  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetIP
	I0920 18:47:50.548132  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.548490  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:50.548517  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.548850  762988 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 18:47:50.549085  762988 start.go:128] duration metric: took 25.06099769s to createHost
	I0920 18:47:50.549116  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:47:50.551581  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.551997  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:50.552025  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.552177  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:47:50.552377  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:50.552543  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:50.552681  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:47:50.552832  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:50.553008  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 18:47:50.553021  762988 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:47:50.655701  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726858070.610915334
	
	I0920 18:47:50.655725  762988 fix.go:216] guest clock: 1726858070.610915334
	I0920 18:47:50.655734  762988 fix.go:229] Guest: 2024-09-20 18:47:50.610915334 +0000 UTC Remote: 2024-09-20 18:47:50.549100081 +0000 UTC m=+71.798161303 (delta=61.815253ms)
	I0920 18:47:50.655756  762988 fix.go:200] guest clock delta is within tolerance: 61.815253ms
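The `date +%s.%N` round trip above is how the host compares its clock with the guest: the seconds.nanoseconds string is parsed and the difference against the local wall clock is checked against a skew tolerance (61.8ms here). A small self-contained Go sketch of that comparison; the tolerance value is an assumption for illustration, not the one minikube actually uses:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the output of `date +%s.%N` into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1726858070.610915334") // value taken from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest) // host "now" minus guest clock
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold, purely for the sketch
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
}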
	I0920 18:47:50.655762  762988 start.go:83] releasing machines lock for "ha-525790-m02", held for 25.167790601s
	I0920 18:47:50.655785  762988 main.go:141] libmachine: (ha-525790-m02) Calling .DriverName
	I0920 18:47:50.656107  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetIP
	I0920 18:47:50.658651  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.659046  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:50.659073  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.661685  762988 out.go:177] * Found network options:
	I0920 18:47:50.663168  762988 out.go:177]   - NO_PROXY=192.168.39.149
	W0920 18:47:50.664561  762988 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 18:47:50.664590  762988 main.go:141] libmachine: (ha-525790-m02) Calling .DriverName
	I0920 18:47:50.665196  762988 main.go:141] libmachine: (ha-525790-m02) Calling .DriverName
	I0920 18:47:50.665478  762988 main.go:141] libmachine: (ha-525790-m02) Calling .DriverName
	I0920 18:47:50.665602  762988 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:47:50.665662  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	W0920 18:47:50.665708  762988 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 18:47:50.665796  762988 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:47:50.665818  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:47:50.668764  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.668800  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.669194  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:50.669220  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.669246  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:50.669261  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.669369  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:47:50.669464  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:47:50.669573  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:50.669655  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:50.669713  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:47:50.669774  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:47:50.669844  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/id_rsa Username:docker}
	I0920 18:47:50.669922  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/id_rsa Username:docker}
	I0920 18:47:50.909505  762988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:47:50.915357  762988 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:47:50.915439  762988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:47:50.932184  762988 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:47:50.932206  762988 start.go:495] detecting cgroup driver to use...
	I0920 18:47:50.932266  762988 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:47:50.948362  762988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:47:50.962800  762988 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:47:50.962889  762988 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:47:50.976893  762988 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:47:50.992982  762988 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:47:51.118282  762988 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:47:51.256995  762988 docker.go:233] disabling docker service ...
	I0920 18:47:51.257080  762988 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:47:51.271445  762988 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:47:51.284437  762988 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:47:51.427984  762988 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:47:51.540460  762988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:47:51.554587  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:47:51.573609  762988 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:47:51.573684  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:51.583854  762988 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:47:51.583919  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:51.594247  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:51.604465  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:51.614547  762988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:47:51.624622  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:51.634811  762988 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:51.651778  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:51.661817  762988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:47:51.670752  762988 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:47:51.670816  762988 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:47:51.683631  762988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:47:51.692558  762988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:47:51.804846  762988 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:47:51.893367  762988 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:47:51.893448  762988 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:47:51.898101  762988 start.go:563] Will wait 60s for crictl version
	I0920 18:47:51.898148  762988 ssh_runner.go:195] Run: which crictl
	I0920 18:47:51.901983  762988 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:47:51.945514  762988 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:47:51.945611  762988 ssh_runner.go:195] Run: crio --version
	I0920 18:47:51.973141  762988 ssh_runner.go:195] Run: crio --version
	I0920 18:47:52.003666  762988 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:47:52.005189  762988 out.go:177]   - env NO_PROXY=192.168.39.149
	I0920 18:47:52.006445  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetIP
	I0920 18:47:52.008892  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:52.009199  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:52.009224  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:52.009410  762988 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:47:52.013674  762988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:47:52.025912  762988 mustload.go:65] Loading cluster: ha-525790
	I0920 18:47:52.026090  762988 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:47:52.026337  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:47:52.026371  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:47:52.041555  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38991
	I0920 18:47:52.042164  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:47:52.042654  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:47:52.042674  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:47:52.043081  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:47:52.043293  762988 main.go:141] libmachine: (ha-525790) Calling .GetState
	I0920 18:47:52.044999  762988 host.go:66] Checking if "ha-525790" exists ...
	I0920 18:47:52.045304  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:47:52.045340  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:47:52.060489  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44511
	I0920 18:47:52.060988  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:47:52.061514  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:47:52.061548  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:47:52.061872  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:47:52.062063  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:52.062249  762988 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790 for IP: 192.168.39.246
	I0920 18:47:52.062265  762988 certs.go:194] generating shared ca certs ...
	I0920 18:47:52.062284  762988 certs.go:226] acquiring lock for ca certs: {Name:mkf559981e1ff96dd3b092845a7637f34a653668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:52.062496  762988 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key
	I0920 18:47:52.062557  762988 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key
	I0920 18:47:52.062572  762988 certs.go:256] generating profile certs ...
	I0920 18:47:52.062674  762988 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.key
	I0920 18:47:52.062712  762988 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.b06313b5
	I0920 18:47:52.062734  762988 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.b06313b5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.149 192.168.39.246 192.168.39.254]
	I0920 18:47:52.367330  762988 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.b06313b5 ...
	I0920 18:47:52.367365  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.b06313b5: {Name:mka76a58a80092d1cbec495d718f7bdea16bb00c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:52.367534  762988 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.b06313b5 ...
	I0920 18:47:52.367547  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.b06313b5: {Name:mkf8231ebc436432da2597e17792d752485bca58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:52.367622  762988 certs.go:381] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.b06313b5 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt
	I0920 18:47:52.367755  762988 certs.go:385] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.b06313b5 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key
	I0920 18:47:52.367883  762988 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key
	I0920 18:47:52.367899  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 18:47:52.367912  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 18:47:52.367926  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 18:47:52.367938  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 18:47:52.367950  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 18:47:52.367961  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 18:47:52.367973  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 18:47:52.367983  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 18:47:52.368035  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem (1338 bytes)
	W0920 18:47:52.368066  762988 certs.go:480] ignoring /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497_empty.pem, impossibly tiny 0 bytes
	I0920 18:47:52.368075  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:47:52.368096  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:47:52.368117  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:47:52.368141  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem (1679 bytes)
	I0920 18:47:52.368184  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 18:47:52.368212  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:47:52.368225  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem -> /usr/share/ca-certificates/748497.pem
	I0920 18:47:52.368237  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /usr/share/ca-certificates/7484972.pem
	I0920 18:47:52.368269  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:52.371227  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:52.371645  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:52.371674  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:52.371783  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:52.371999  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:52.372168  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:52.372324  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:47:52.443286  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0920 18:47:52.448837  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0920 18:47:52.460311  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0920 18:47:52.464490  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0920 18:47:52.475983  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0920 18:47:52.480213  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0920 18:47:52.494615  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0920 18:47:52.499007  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0920 18:47:52.508955  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0920 18:47:52.516124  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0920 18:47:52.526659  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0920 18:47:52.530903  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0920 18:47:52.541062  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:47:52.569451  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:47:52.592930  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:47:52.616256  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:47:52.639385  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0920 18:47:52.662394  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:47:52.686445  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:47:52.710153  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:47:52.734191  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:47:52.757258  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem --> /usr/share/ca-certificates/748497.pem (1338 bytes)
	I0920 18:47:52.780903  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /usr/share/ca-certificates/7484972.pem (1708 bytes)
	I0920 18:47:52.804939  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0920 18:47:52.821362  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0920 18:47:52.837317  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0920 18:47:52.853233  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0920 18:47:52.869254  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0920 18:47:52.885005  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0920 18:47:52.900806  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0920 18:47:52.917027  762988 ssh_runner.go:195] Run: openssl version
	I0920 18:47:52.922702  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:47:52.933000  762988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:47:52.937464  762988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:47:52.937523  762988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:47:52.943170  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:47:52.953509  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/748497.pem && ln -fs /usr/share/ca-certificates/748497.pem /etc/ssl/certs/748497.pem"
	I0920 18:47:52.964038  762988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/748497.pem
	I0920 18:47:52.968718  762988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 18:33 /usr/share/ca-certificates/748497.pem
	I0920 18:47:52.968771  762988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/748497.pem
	I0920 18:47:52.974378  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/748497.pem /etc/ssl/certs/51391683.0"
	I0920 18:47:52.984752  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7484972.pem && ln -fs /usr/share/ca-certificates/7484972.pem /etc/ssl/certs/7484972.pem"
	I0920 18:47:52.994888  762988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7484972.pem
	I0920 18:47:52.999311  762988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 18:33 /usr/share/ca-certificates/7484972.pem
	I0920 18:47:52.999370  762988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7484972.pem
	I0920 18:47:53.005001  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7484972.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:47:53.015691  762988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:47:53.019635  762988 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 18:47:53.019692  762988 kubeadm.go:934] updating node {m02 192.168.39.246 8443 v1.31.1 crio true true} ...
	I0920 18:47:53.019793  762988 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-525790-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:47:53.019822  762988 kube-vip.go:115] generating kube-vip config ...
	I0920 18:47:53.019860  762988 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 18:47:53.036153  762988 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 18:47:53.036237  762988 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
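The generated static-pod manifest above runs kube-vip on each control-plane node: with cp_enable and lb_enable set it announces the virtual IP 192.168.39.254 via ARP (vip_arp) and load-balances API traffic on port 8443, while the plndr-cp-lock lease decides which node currently owns the address. A minimal reachability probe for that VIP, as a Go sketch; it only checks that something is accepting TCP on the VIP, nothing more:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// As long as one control-plane apiserver is up and kube-vip holds the lease,
	// a plain TCP dial against the advertised VIP:port should succeed.
	conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 3*time.Second)
	if err != nil {
		fmt.Println("control-plane VIP not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("control-plane VIP is accepting connections")
}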
	I0920 18:47:53.036305  762988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:47:53.046004  762988 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0920 18:47:53.046062  762988 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0920 18:47:53.055936  762988 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0920 18:47:53.055979  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 18:47:53.056005  762988 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0920 18:47:53.056053  762988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 18:47:53.056076  762988 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubeadm
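The download URLs above carry a "checksum=file:...sha256" hint: the binary is fetched from dl.k8s.io and verified against the published .sha256 file before being copied into /var/lib/minikube/binaries. A rough Go sketch of that verify-while-downloading pattern, using only the standard library; the assumption that the digest file starts with the hex digest (optionally followed by a filename) matches the upstream release artifacts:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// download writes url to path and returns the hex SHA-256 of the bytes written.
func download(url, path string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	out, err := os.Create(path)
	if err != nil {
		return "", err
	}
	defer out.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	url := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm"
	got, err := download(url, "kubeadm")
	if err != nil {
		fmt.Println("download failed:", err)
		return
	}
	resp, err := http.Get(url + ".sha256")
	if err != nil {
		fmt.Println("checksum fetch failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	want := strings.Fields(string(body))[0] // digest is the first field
	fmt.Println("checksum ok:", got == want)
}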
	I0920 18:47:53.060289  762988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0920 18:47:53.060315  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0920 18:47:53.789944  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 18:47:53.790047  762988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 18:47:53.795156  762988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0920 18:47:53.795193  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0920 18:47:53.889636  762988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:47:53.918466  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 18:47:53.918585  762988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 18:47:53.930311  762988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0920 18:47:53.930362  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0920 18:47:54.378013  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0920 18:47:54.388156  762988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 18:47:54.404650  762988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:47:54.420945  762988 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 18:47:54.437522  762988 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 18:47:54.441369  762988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:47:54.453920  762988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:47:54.571913  762988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:47:54.589386  762988 host.go:66] Checking if "ha-525790" exists ...
	I0920 18:47:54.589919  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:47:54.589985  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:47:54.605308  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39785
	I0920 18:47:54.605924  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:47:54.606447  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:47:54.606470  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:47:54.606870  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:47:54.607082  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:54.607245  762988 start.go:317] joinCluster: &{Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:47:54.607339  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0920 18:47:54.607355  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:54.610593  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:54.611156  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:54.611186  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:54.611363  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:54.611536  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:54.611703  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:54.611875  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:47:54.765700  762988 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:47:54.765757  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token fgipyq.kw78xdqejinofgh1 --discovery-token-ca-cert-hash sha256:947ef21afc8104efa9fe7e5dbe397ab7540e2665a521761d784eb9c9d11b061d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-525790-m02 --control-plane --apiserver-advertise-address=192.168.39.246 --apiserver-bind-port=8443"
	I0920 18:48:15.991126  762988 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token fgipyq.kw78xdqejinofgh1 --discovery-token-ca-cert-hash sha256:947ef21afc8104efa9fe7e5dbe397ab7540e2665a521761d784eb9c9d11b061d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-525790-m02 --control-plane --apiserver-advertise-address=192.168.39.246 --apiserver-bind-port=8443": (21.225342383s)
	I0920 18:48:15.991161  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0920 18:48:16.566701  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-525790-m02 minikube.k8s.io/updated_at=2024_09_20T18_48_16_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a minikube.k8s.io/name=ha-525790 minikube.k8s.io/primary=false
	I0920 18:48:16.719509  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-525790-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0920 18:48:16.847244  762988 start.go:319] duration metric: took 22.239995563s to joinCluster
	I0920 18:48:16.847322  762988 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:48:16.847615  762988 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:48:16.849000  762988 out.go:177] * Verifying Kubernetes components...
	I0920 18:48:16.850372  762988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:48:17.092103  762988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:48:17.120788  762988 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:48:17.121173  762988 kapi.go:59] client config for ha-525790: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.crt", KeyFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.key", CAFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0920 18:48:17.121271  762988 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.149:8443
	I0920 18:48:17.121564  762988 node_ready.go:35] waiting up to 6m0s for node "ha-525790-m02" to be "Ready" ...
	I0920 18:48:17.121729  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:17.121741  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:17.121752  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:17.121758  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:17.132247  762988 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0920 18:48:17.622473  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:17.622504  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:17.622516  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:17.622523  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:17.625769  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:18.122399  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:18.122419  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:18.122427  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:18.122432  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:18.136165  762988 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0920 18:48:18.622000  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:18.622027  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:18.622037  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:18.622041  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:18.626792  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:19.122652  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:19.122677  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:19.122685  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:19.122691  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:19.125929  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:19.126379  762988 node_ready.go:53] node "ha-525790-m02" has status "Ready":"False"
	I0920 18:48:19.622318  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:19.622339  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:19.622347  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:19.622351  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:19.625821  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:20.121842  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:20.121865  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:20.121874  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:20.121879  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:20.126973  762988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 18:48:20.622440  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:20.622464  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:20.622472  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:20.622476  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:20.625669  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:21.122479  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:21.122503  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:21.122514  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:21.122518  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:21.126309  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:21.127070  762988 node_ready.go:53] node "ha-525790-m02" has status "Ready":"False"
	I0920 18:48:21.622431  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:21.622455  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:21.622464  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:21.622467  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:21.625353  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:22.122551  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:22.122577  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:22.122588  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:22.122594  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:22.130464  762988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0920 18:48:22.622444  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:22.622465  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:22.622473  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:22.622476  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:22.624966  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:23.121881  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:23.121906  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:23.121915  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:23.121918  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:23.126058  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:23.621933  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:23.621958  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:23.621967  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:23.621971  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:23.625609  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:23.626079  762988 node_ready.go:53] node "ha-525790-m02" has status "Ready":"False"
	I0920 18:48:24.121954  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:24.121979  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:24.121986  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:24.121990  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:24.126296  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:24.622206  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:24.622229  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:24.622237  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:24.622241  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:24.625435  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:25.121906  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:25.121929  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:25.121937  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:25.121943  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:25.125410  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:25.622826  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:25.622865  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:25.622883  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:25.622888  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:25.626033  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:25.626689  762988 node_ready.go:53] node "ha-525790-m02" has status "Ready":"False"
	I0920 18:48:26.121997  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:26.122029  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:26.122041  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:26.122047  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:26.126269  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:26.622175  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:26.622199  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:26.622207  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:26.622216  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:26.625403  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:27.122340  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:27.122371  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:27.122386  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:27.122391  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:27.126523  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:27.622670  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:27.622696  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:27.622708  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:27.622714  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:27.625864  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:28.121813  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:28.121839  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:28.121856  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:28.121861  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:28.127100  762988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 18:48:28.127893  762988 node_ready.go:53] node "ha-525790-m02" has status "Ready":"False"
	I0920 18:48:28.622194  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:28.622218  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:28.622226  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:28.622231  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:28.625675  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:29.122510  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:29.122544  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:29.122556  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:29.122561  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:29.126584  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:29.622212  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:29.622230  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:29.622238  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:29.622242  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:29.625683  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:30.121899  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:30.121923  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:30.121931  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:30.121938  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:30.126500  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:30.622237  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:30.622262  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:30.622273  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:30.622282  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:30.625998  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:30.626739  762988 node_ready.go:53] node "ha-525790-m02" has status "Ready":"False"
	I0920 18:48:31.122135  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:31.122162  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:31.122175  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:31.122180  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:31.126468  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:31.622529  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:31.622556  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:31.622568  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:31.622574  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:31.625581  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:32.122718  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:32.122743  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:32.122753  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:32.122758  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:32.126212  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:32.622048  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:32.622078  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:32.622090  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:32.622097  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:32.625566  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:33.122722  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:33.122748  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:33.122759  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:33.122766  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:33.125690  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:33.126429  762988 node_ready.go:53] node "ha-525790-m02" has status "Ready":"False"
	I0920 18:48:33.622805  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:33.622839  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:33.622867  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:33.622874  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:33.626126  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:34.122562  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:34.122584  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.122593  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.122596  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.125490  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:34.126097  762988 node_ready.go:49] node "ha-525790-m02" has status "Ready":"True"
	I0920 18:48:34.126121  762988 node_ready.go:38] duration metric: took 17.004511153s for node "ha-525790-m02" to be "Ready" ...
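
node_ready above is a simple poll: GET the Node object roughly every 500ms and stop once its "Ready" condition reports "True" (about 17s in this run). A stdlib-only sketch of that loop; the InsecureSkipVerify transport stands in for the client-certificate auth the real client uses, and the URL and node name are copied from the log:

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// Minimal view of the only fields the readiness check needs.
type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	url := "https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02"
	// Illustration only: skips certificate verification and the client-cert
	// auth a real apiserver client would present.
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}

	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait in the log
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			var n node
			json.NewDecoder(resp.Body).Decode(&n)
			resp.Body.Close()
			for _, c := range n.Status.Conditions {
				if c.Type == "Ready" && c.Status == "True" {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	fmt.Println("timed out waiting for Ready")
}
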
	I0920 18:48:34.126132  762988 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:48:34.126214  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods
	I0920 18:48:34.126225  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.126235  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.126244  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.130332  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:34.136520  762988 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nfnkj" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.136636  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nfnkj
	I0920 18:48:34.136651  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.136659  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.136662  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.139356  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:34.140019  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:34.140035  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.140044  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.140050  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.142804  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:34.143520  762988 pod_ready.go:93] pod "coredns-7c65d6cfc9-nfnkj" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:34.143541  762988 pod_ready.go:82] duration metric: took 6.997099ms for pod "coredns-7c65d6cfc9-nfnkj" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.143552  762988 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rpcds" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.143630  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rpcds
	I0920 18:48:34.143640  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.143650  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.143656  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.146528  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:34.147267  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:34.147282  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.147291  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.147298  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.149448  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:34.149863  762988 pod_ready.go:93] pod "coredns-7c65d6cfc9-rpcds" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:34.149880  762988 pod_ready.go:82] duration metric: took 6.32048ms for pod "coredns-7c65d6cfc9-rpcds" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.149890  762988 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.149955  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/etcd-ha-525790
	I0920 18:48:34.149964  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.149974  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.149982  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.152307  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:34.152827  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:34.152841  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.152848  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.152852  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.155039  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:34.155552  762988 pod_ready.go:93] pod "etcd-ha-525790" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:34.155568  762988 pod_ready.go:82] duration metric: took 5.670104ms for pod "etcd-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.155578  762988 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.155636  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/etcd-ha-525790-m02
	I0920 18:48:34.155646  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.155655  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.155660  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.157775  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:34.158230  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:34.158244  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.158252  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.158256  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.160455  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:34.161045  762988 pod_ready.go:93] pod "etcd-ha-525790-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:34.161062  762988 pod_ready.go:82] duration metric: took 5.476839ms for pod "etcd-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.161078  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.323482  762988 request.go:632] Waited for 162.335052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790
	I0920 18:48:34.323561  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790
	I0920 18:48:34.323567  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.323577  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.323596  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.327021  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:34.523234  762988 request.go:632] Waited for 195.376284ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:34.523291  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:34.523297  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.523304  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.523308  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.526504  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:34.527263  762988 pod_ready.go:93] pod "kube-apiserver-ha-525790" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:34.527282  762988 pod_ready.go:82] duration metric: took 366.197667ms for pod "kube-apiserver-ha-525790" in "kube-system" namespace to be "Ready" ...
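
The "Waited for ... due to client-side throttling, not priority and fairness" lines are produced by the client's own rate limiter, not by the API server: client-go queues requests through a token bucket (commonly 5 QPS with a burst of 10 when QPS and Burst are left at zero in rest.Config). A toy, stdlib-only token bucket that reproduces that behaviour; the 5/10 numbers are the usual defaults, not values read from this run:

package main

import (
	"fmt"
	"time"
)

// tokenBucket is a stand-in for client-go's flowcontrol rate limiter: qps
// tokens refill per second up to burst, and a request sleeps when the bucket
// is empty - which is exactly what the "Waited for ..." log lines record.
type tokenBucket struct {
	tokens float64
	qps    float64
	burst  float64
	last   time.Time
}

func (b *tokenBucket) wait() time.Duration {
	now := time.Now()
	b.tokens += now.Sub(b.last).Seconds() * b.qps
	if b.tokens > b.burst {
		b.tokens = b.burst
	}
	b.last = now
	if b.tokens >= 1 {
		b.tokens--
		return 0
	}
	d := time.Duration((1 - b.tokens) / b.qps * float64(time.Second))
	time.Sleep(d)
	b.tokens = 0
	b.last = time.Now()
	return d
}

func main() {
	// Assumed values: the commonly cited client-go defaults (5 QPS, burst 10).
	b := &tokenBucket{tokens: 10, qps: 5, burst: 10, last: time.Now()}
	for i := 0; i < 15; i++ {
		if d := b.wait(); d > 0 {
			fmt.Printf("request %d waited %v due to client-side throttling\n", i, d.Round(time.Millisecond))
		} else {
			fmt.Printf("request %d sent immediately\n", i)
		}
	}
}
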
	I0920 18:48:34.527291  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.722970  762988 request.go:632] Waited for 195.600109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790-m02
	I0920 18:48:34.723047  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790-m02
	I0920 18:48:34.723055  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.723066  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.723077  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.727681  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:34.922800  762988 request.go:632] Waited for 194.329492ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:34.922877  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:34.922883  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.922890  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.922895  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.925710  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:34.926612  762988 pod_ready.go:93] pod "kube-apiserver-ha-525790-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:34.926641  762988 pod_ready.go:82] duration metric: took 399.342285ms for pod "kube-apiserver-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.926656  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:35.122660  762988 request.go:632] Waited for 195.882629ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-525790
	I0920 18:48:35.122740  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-525790
	I0920 18:48:35.122749  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:35.122759  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:35.122770  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:35.126705  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:35.322726  762988 request.go:632] Waited for 195.293792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:35.322782  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:35.322787  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:35.322795  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:35.322800  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:35.326393  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:35.326918  762988 pod_ready.go:93] pod "kube-controller-manager-ha-525790" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:35.326946  762988 pod_ready.go:82] duration metric: took 400.278191ms for pod "kube-controller-manager-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:35.326961  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:35.523401  762988 request.go:632] Waited for 196.343619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-525790-m02
	I0920 18:48:35.523471  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-525790-m02
	I0920 18:48:35.523481  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:35.523489  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:35.523496  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:35.526931  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:35.722974  762988 request.go:632] Waited for 195.371903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:35.723051  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:35.723062  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:35.723074  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:35.723083  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:35.726332  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:35.726861  762988 pod_ready.go:93] pod "kube-controller-manager-ha-525790-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:35.726891  762988 pod_ready.go:82] duration metric: took 399.92136ms for pod "kube-controller-manager-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:35.726906  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-958jz" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:35.922820  762988 request.go:632] Waited for 195.83508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-958jz
	I0920 18:48:35.922930  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-958jz
	I0920 18:48:35.922936  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:35.922947  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:35.922954  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:35.926053  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:36.123110  762988 request.go:632] Waited for 196.38428ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:36.123185  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:36.123190  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:36.123198  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:36.123202  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:36.126954  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:36.127418  762988 pod_ready.go:93] pod "kube-proxy-958jz" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:36.127437  762988 pod_ready.go:82] duration metric: took 400.524478ms for pod "kube-proxy-958jz" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:36.127449  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sspfs" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:36.323527  762988 request.go:632] Waited for 195.98167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sspfs
	I0920 18:48:36.323598  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sspfs
	I0920 18:48:36.323607  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:36.323616  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:36.323622  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:36.327351  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:36.523422  762988 request.go:632] Waited for 195.381458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:36.523486  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:36.523492  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:36.523500  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:36.523509  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:36.526668  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:36.527360  762988 pod_ready.go:93] pod "kube-proxy-sspfs" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:36.527381  762988 pod_ready.go:82] duration metric: took 399.9242ms for pod "kube-proxy-sspfs" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:36.527392  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:36.723613  762988 request.go:632] Waited for 196.121297ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-525790
	I0920 18:48:36.723676  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-525790
	I0920 18:48:36.723681  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:36.723690  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:36.723695  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:36.726896  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:36.922949  762988 request.go:632] Waited for 195.378354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:36.923034  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:36.923046  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:36.923061  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:36.923071  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:36.926320  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:36.926935  762988 pod_ready.go:93] pod "kube-scheduler-ha-525790" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:36.926956  762988 pod_ready.go:82] duration metric: took 399.558392ms for pod "kube-scheduler-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:36.926967  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:37.122901  762988 request.go:632] Waited for 195.82569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-525790-m02
	I0920 18:48:37.122982  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-525790-m02
	I0920 18:48:37.122988  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:37.122996  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:37.123003  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:37.126347  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:37.323372  762988 request.go:632] Waited for 196.406319ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:37.323437  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:37.323442  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:37.323450  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:37.323457  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:37.326709  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:37.327455  762988 pod_ready.go:93] pod "kube-scheduler-ha-525790-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:37.327476  762988 pod_ready.go:82] duration metric: took 400.502746ms for pod "kube-scheduler-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:37.327489  762988 pod_ready.go:39] duration metric: took 3.201339533s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
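
pod_ready repeats the node_ready pattern per pod: fetch the Pod, check its "Ready" condition, then fetch the Node it is scheduled on; the aggregate check simply lists everything in kube-system and filters by the labels named above. As a variation, the same question can be asked per component with a labelSelector query parameter; a sketch with the same TLS/auth caveats as before (the selector below is an illustrative choice, not what minikube sends):

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// Just the fields needed to report pod readiness.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	base := "https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods"
	sel := url.Values{"labelSelector": {"component=kube-apiserver"}}
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}

	resp, err := client.Get(base + "?" + sel.Encode())
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	var pods podList
	json.NewDecoder(resp.Body).Decode(&pods)
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = true
			}
		}
		fmt.Printf("%s phase=%s ready=%v\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}
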
	I0920 18:48:37.327504  762988 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:48:37.327555  762988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:48:37.343797  762988 api_server.go:72] duration metric: took 20.496433387s to wait for apiserver process to appear ...
	I0920 18:48:37.343829  762988 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:48:37.343854  762988 api_server.go:253] Checking apiserver healthz at https://192.168.39.149:8443/healthz ...
	I0920 18:48:37.348107  762988 api_server.go:279] https://192.168.39.149:8443/healthz returned 200:
	ok
	I0920 18:48:37.348169  762988 round_trippers.go:463] GET https://192.168.39.149:8443/version
	I0920 18:48:37.348176  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:37.348184  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:37.348191  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:37.349126  762988 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0920 18:48:37.349250  762988 api_server.go:141] control plane version: v1.31.1
	I0920 18:48:37.349267  762988 api_server.go:131] duration metric: took 5.431776ms to wait for apiserver health ...
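
The health gate is two cheap endpoints: /healthz, which should return the literal body "ok", and /version, whose gitVersion is the control-plane version (v1.31.1 here). A small Go equivalent of those two probes; certificate verification and authentication are again omitted for brevity, which may or may not be accepted depending on the cluster's anonymous-access settings:

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	host := "https://192.168.39.149:8443"
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}

	// /healthz returns a plain-text "ok" when the apiserver is healthy.
	if resp, err := client.Get(host + "/healthz"); err == nil {
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
	}

	// /version returns JSON whose gitVersion is the control-plane version.
	if resp, err := client.Get(host + "/version"); err == nil {
		var v struct {
			GitVersion string `json:"gitVersion"`
		}
		json.NewDecoder(resp.Body).Decode(&v)
		resp.Body.Close()
		fmt.Println("control plane version:", v.GitVersion)
	}
}
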
	I0920 18:48:37.349274  762988 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:48:37.522627  762988 request.go:632] Waited for 173.275089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods
	I0920 18:48:37.522715  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods
	I0920 18:48:37.522723  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:37.522731  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:37.522738  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:37.528234  762988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 18:48:37.534123  762988 system_pods.go:59] 17 kube-system pods found
	I0920 18:48:37.534155  762988 system_pods.go:61] "coredns-7c65d6cfc9-nfnkj" [7994989d-6bfa-4d25-b7b7-662d2e6c742c] Running
	I0920 18:48:37.534161  762988 system_pods.go:61] "coredns-7c65d6cfc9-rpcds" [7db58219-7147-4a45-b233-ef3c698566ef] Running
	I0920 18:48:37.534171  762988 system_pods.go:61] "etcd-ha-525790" [f23cd40e-ac8d-451b-9bf9-2ef5d62ef4b6] Running
	I0920 18:48:37.534176  762988 system_pods.go:61] "etcd-ha-525790-m02" [5a29103e-6da3-40d1-be3c-58fdc0f28b54] Running
	I0920 18:48:37.534181  762988 system_pods.go:61] "kindnet-8glgp" [f462782e-1ff6-410a-8359-de3360d380b0] Running
	I0920 18:48:37.534186  762988 system_pods.go:61] "kindnet-9qbm6" [87e8ae18-a561-48ec-9835-27446b6917d3] Running
	I0920 18:48:37.534190  762988 system_pods.go:61] "kube-apiserver-ha-525790" [0e3563fd-5185-4dc6-8d9b-a7d954b96c8d] Running
	I0920 18:48:37.534195  762988 system_pods.go:61] "kube-apiserver-ha-525790-m02" [b3966e2e-ce3d-4916-b73c-0d80cd1793f0] Running
	I0920 18:48:37.534202  762988 system_pods.go:61] "kube-controller-manager-ha-525790" [1d695853-6a7e-487d-a52b-9aceb1fc9ff3] Running
	I0920 18:48:37.534210  762988 system_pods.go:61] "kube-controller-manager-ha-525790-m02" [090c1833-3800-4e13-b9a7-c03680f3d55d] Running
	I0920 18:48:37.534213  762988 system_pods.go:61] "kube-proxy-958jz" [46603403-eb82-4f15-a1da-da62194a072f] Running
	I0920 18:48:37.534216  762988 system_pods.go:61] "kube-proxy-sspfs" [15203515-fc45-4624-b97e-8ec247f01e2d] Running
	I0920 18:48:37.534221  762988 system_pods.go:61] "kube-scheduler-ha-525790" [8cb7e23e-c1d1-4753-9758-b17ef9fd08d7] Running
	I0920 18:48:37.534224  762988 system_pods.go:61] "kube-scheduler-ha-525790-m02" [dc9a5561-5d41-445d-a0ba-de3b2405f821] Running
	I0920 18:48:37.534228  762988 system_pods.go:61] "kube-vip-ha-525790" [0b318b1e-7a85-4c8c-8a5a-2fee226d7702] Running
	I0920 18:48:37.534231  762988 system_pods.go:61] "kube-vip-ha-525790-m02" [f2316231-5c1d-4bf2-ae62-5a4202b5818b] Running
	I0920 18:48:37.534234  762988 system_pods.go:61] "storage-provisioner" [ea6bf34f-c1f7-4216-a61f-be30846c991b] Running
	I0920 18:48:37.534241  762988 system_pods.go:74] duration metric: took 184.960329ms to wait for pod list to return data ...
	I0920 18:48:37.534252  762988 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:48:37.722639  762988 request.go:632] Waited for 188.265166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/default/serviceaccounts
	I0920 18:48:37.722711  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/default/serviceaccounts
	I0920 18:48:37.722717  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:37.722726  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:37.722730  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:37.726193  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:37.726449  762988 default_sa.go:45] found service account: "default"
	I0920 18:48:37.726469  762988 default_sa.go:55] duration metric: took 192.210022ms for default service account to be created ...
	I0920 18:48:37.726480  762988 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:48:37.922955  762988 request.go:632] Waited for 196.382479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods
	I0920 18:48:37.923039  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods
	I0920 18:48:37.923050  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:37.923065  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:37.923072  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:37.927492  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:37.932712  762988 system_pods.go:86] 17 kube-system pods found
	I0920 18:48:37.932740  762988 system_pods.go:89] "coredns-7c65d6cfc9-nfnkj" [7994989d-6bfa-4d25-b7b7-662d2e6c742c] Running
	I0920 18:48:37.932746  762988 system_pods.go:89] "coredns-7c65d6cfc9-rpcds" [7db58219-7147-4a45-b233-ef3c698566ef] Running
	I0920 18:48:37.932750  762988 system_pods.go:89] "etcd-ha-525790" [f23cd40e-ac8d-451b-9bf9-2ef5d62ef4b6] Running
	I0920 18:48:37.932754  762988 system_pods.go:89] "etcd-ha-525790-m02" [5a29103e-6da3-40d1-be3c-58fdc0f28b54] Running
	I0920 18:48:37.932757  762988 system_pods.go:89] "kindnet-8glgp" [f462782e-1ff6-410a-8359-de3360d380b0] Running
	I0920 18:48:37.932761  762988 system_pods.go:89] "kindnet-9qbm6" [87e8ae18-a561-48ec-9835-27446b6917d3] Running
	I0920 18:48:37.932765  762988 system_pods.go:89] "kube-apiserver-ha-525790" [0e3563fd-5185-4dc6-8d9b-a7d954b96c8d] Running
	I0920 18:48:37.932769  762988 system_pods.go:89] "kube-apiserver-ha-525790-m02" [b3966e2e-ce3d-4916-b73c-0d80cd1793f0] Running
	I0920 18:48:37.932774  762988 system_pods.go:89] "kube-controller-manager-ha-525790" [1d695853-6a7e-487d-a52b-9aceb1fc9ff3] Running
	I0920 18:48:37.932779  762988 system_pods.go:89] "kube-controller-manager-ha-525790-m02" [090c1833-3800-4e13-b9a7-c03680f3d55d] Running
	I0920 18:48:37.932786  762988 system_pods.go:89] "kube-proxy-958jz" [46603403-eb82-4f15-a1da-da62194a072f] Running
	I0920 18:48:37.932789  762988 system_pods.go:89] "kube-proxy-sspfs" [15203515-fc45-4624-b97e-8ec247f01e2d] Running
	I0920 18:48:37.932792  762988 system_pods.go:89] "kube-scheduler-ha-525790" [8cb7e23e-c1d1-4753-9758-b17ef9fd08d7] Running
	I0920 18:48:37.932797  762988 system_pods.go:89] "kube-scheduler-ha-525790-m02" [dc9a5561-5d41-445d-a0ba-de3b2405f821] Running
	I0920 18:48:37.932800  762988 system_pods.go:89] "kube-vip-ha-525790" [0b318b1e-7a85-4c8c-8a5a-2fee226d7702] Running
	I0920 18:48:37.932805  762988 system_pods.go:89] "kube-vip-ha-525790-m02" [f2316231-5c1d-4bf2-ae62-5a4202b5818b] Running
	I0920 18:48:37.932808  762988 system_pods.go:89] "storage-provisioner" [ea6bf34f-c1f7-4216-a61f-be30846c991b] Running
	I0920 18:48:37.932815  762988 system_pods.go:126] duration metric: took 206.326319ms to wait for k8s-apps to be running ...
	I0920 18:48:37.932824  762988 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:48:37.932877  762988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:48:37.949333  762988 system_svc.go:56] duration metric: took 16.495186ms WaitForService to wait for kubelet
	I0920 18:48:37.949367  762988 kubeadm.go:582] duration metric: took 21.102009969s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:48:37.949386  762988 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:48:38.122741  762988 request.go:632] Waited for 173.263132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes
	I0920 18:48:38.122838  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes
	I0920 18:48:38.122859  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:38.122875  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:38.122883  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:38.126598  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:38.127344  762988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:48:38.127374  762988 node_conditions.go:123] node cpu capacity is 2
	I0920 18:48:38.127387  762988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:48:38.127390  762988 node_conditions.go:123] node cpu capacity is 2
	I0920 18:48:38.127395  762988 node_conditions.go:105] duration metric: took 178.00469ms to run NodePressure ...
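
node_conditions reads each node's status.capacity to confirm ephemeral storage and CPU counts (17734596Ki and 2 CPUs per node in this run). The only slightly fiddly part is the Ki-suffixed quantity; a sketch that handles just the binary suffixes seen here (a real client would use apimachinery's resource.Quantity, which covers every suffix):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseQuantity converts binary-suffixed capacity strings such as
// "17734596Ki" into bytes. Only the suffixes relevant to this log are handled.
func parseQuantity(q string) (int64, error) {
	mult := int64(1)
	switch {
	case strings.HasSuffix(q, "Ki"):
		mult, q = 1024, strings.TrimSuffix(q, "Ki")
	case strings.HasSuffix(q, "Mi"):
		mult, q = 1024*1024, strings.TrimSuffix(q, "Mi")
	case strings.HasSuffix(q, "Gi"):
		mult, q = 1024*1024*1024, strings.TrimSuffix(q, "Gi")
	}
	n, err := strconv.ParseInt(q, 10, 64)
	return n * mult, err
}

func main() {
	capacity := map[string]string{ // values copied from the log above
		"ephemeral-storage": "17734596Ki",
		"cpu":               "2",
	}
	storage, _ := parseQuantity(capacity["ephemeral-storage"])
	cpu, _ := strconv.Atoi(capacity["cpu"])
	fmt.Printf("ephemeral storage: %d bytes (%.1f GiB), cpus: %d\n",
		storage, float64(storage)/(1<<30), cpu)
}
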
	I0920 18:48:38.127407  762988 start.go:241] waiting for startup goroutines ...
	I0920 18:48:38.127433  762988 start.go:255] writing updated cluster config ...
	I0920 18:48:38.129743  762988 out.go:201] 
	I0920 18:48:38.131559  762988 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:48:38.131667  762988 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 18:48:38.133474  762988 out.go:177] * Starting "ha-525790-m03" control-plane node in "ha-525790" cluster
	I0920 18:48:38.134688  762988 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:48:38.134716  762988 cache.go:56] Caching tarball of preloaded images
	I0920 18:48:38.134840  762988 preload.go:172] Found /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:48:38.134876  762988 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:48:38.135002  762988 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 18:48:38.135229  762988 start.go:360] acquireMachinesLock for ha-525790-m03: {Name:mke27b943eaf3105a3a7818ba8cbb5bd07aa92e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:48:38.135283  762988 start.go:364] duration metric: took 31.132µs to acquireMachinesLock for "ha-525790-m03"
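
acquireMachinesLock serializes machine creation across concurrent provisioning goroutines; the options logged above are a 500ms retry delay and a 13m timeout. A file-lock sketch with the same shape using flock, purely as an illustration of the idea rather than minikube's actual lock implementation (the lock path below is made up):

package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

// acquire takes an exclusive flock on path, retrying every delay until
// timeout - mirroring the Delay:500ms Timeout:13m0s options in the log.
func acquire(path string, delay, timeout time.Duration) (*os.File, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o644)
	if err != nil {
		return nil, err
	}
	deadline := time.Now().Add(timeout)
	for {
		err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB)
		if err == nil {
			return f, nil // lock is held until f is closed
		}
		if time.Now().After(deadline) {
			f.Close()
			return nil, fmt.Errorf("timed out waiting for lock %s: %w", path, err)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	f, err := acquire("/tmp/ha-525790-m03.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
	fmt.Printf("acquired machines lock in %s\n", time.Since(start))
}
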
	I0920 18:48:38.135310  762988 start.go:93] Provisioning new machine with config: &{Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:48:38.135483  762988 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0920 18:48:38.137252  762988 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 18:48:38.137351  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:48:38.137389  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:48:38.152991  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40037
	I0920 18:48:38.153403  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:48:38.153921  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:48:38.153950  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:48:38.154269  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:48:38.154503  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetMachineName
	I0920 18:48:38.154635  762988 main.go:141] libmachine: (ha-525790-m03) Calling .DriverName
	I0920 18:48:38.154794  762988 start.go:159] libmachine.API.Create for "ha-525790" (driver="kvm2")
	I0920 18:48:38.154827  762988 client.go:168] LocalClient.Create starting
	I0920 18:48:38.154887  762988 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem
	I0920 18:48:38.154928  762988 main.go:141] libmachine: Decoding PEM data...
	I0920 18:48:38.154951  762988 main.go:141] libmachine: Parsing certificate...
	I0920 18:48:38.155015  762988 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem
	I0920 18:48:38.155046  762988 main.go:141] libmachine: Decoding PEM data...
	I0920 18:48:38.155064  762988 main.go:141] libmachine: Parsing certificate...
	I0920 18:48:38.155089  762988 main.go:141] libmachine: Running pre-create checks...
	I0920 18:48:38.155100  762988 main.go:141] libmachine: (ha-525790-m03) Calling .PreCreateCheck
	I0920 18:48:38.155260  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetConfigRaw
	I0920 18:48:38.155601  762988 main.go:141] libmachine: Creating machine...
	I0920 18:48:38.155615  762988 main.go:141] libmachine: (ha-525790-m03) Calling .Create
	I0920 18:48:38.155731  762988 main.go:141] libmachine: (ha-525790-m03) Creating KVM machine...
	I0920 18:48:38.156940  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found existing default KVM network
	I0920 18:48:38.157092  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found existing private KVM network mk-ha-525790
	I0920 18:48:38.157240  762988 main.go:141] libmachine: (ha-525790-m03) Setting up store path in /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03 ...
	I0920 18:48:38.157269  762988 main.go:141] libmachine: (ha-525790-m03) Building disk image from file:///home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 18:48:38.157310  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:38.157208  763765 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:48:38.157402  762988 main.go:141] libmachine: (ha-525790-m03) Downloading /home/jenkins/minikube-integration/19678-739831/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 18:48:38.440404  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:38.440283  763765 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/id_rsa...
	I0920 18:48:38.491702  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:38.491581  763765 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/ha-525790-m03.rawdisk...
	I0920 18:48:38.491754  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Writing magic tar header
	I0920 18:48:38.491768  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Writing SSH key tar header
	I0920 18:48:38.491779  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:38.491723  763765 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03 ...
	I0920 18:48:38.491856  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03
	I0920 18:48:38.491883  762988 main.go:141] libmachine: (ha-525790-m03) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03 (perms=drwx------)
	I0920 18:48:38.491895  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube/machines
	I0920 18:48:38.491911  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:48:38.491922  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831
	I0920 18:48:38.491935  762988 main.go:141] libmachine: (ha-525790-m03) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube/machines (perms=drwxr-xr-x)
	I0920 18:48:38.491947  762988 main.go:141] libmachine: (ha-525790-m03) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube (perms=drwxr-xr-x)
	I0920 18:48:38.491958  762988 main.go:141] libmachine: (ha-525790-m03) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831 (perms=drwxrwxr-x)
	I0920 18:48:38.491971  762988 main.go:141] libmachine: (ha-525790-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 18:48:38.491983  762988 main.go:141] libmachine: (ha-525790-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 18:48:38.491992  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 18:48:38.492002  762988 main.go:141] libmachine: (ha-525790-m03) Creating domain...
	I0920 18:48:38.492014  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Checking permissions on dir: /home/jenkins
	I0920 18:48:38.492025  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Checking permissions on dir: /home
	I0920 18:48:38.492039  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Skipping /home - not owner
	I0920 18:48:38.492931  762988 main.go:141] libmachine: (ha-525790-m03) define libvirt domain using xml: 
	I0920 18:48:38.492957  762988 main.go:141] libmachine: (ha-525790-m03) <domain type='kvm'>
	I0920 18:48:38.492966  762988 main.go:141] libmachine: (ha-525790-m03)   <name>ha-525790-m03</name>
	I0920 18:48:38.492979  762988 main.go:141] libmachine: (ha-525790-m03)   <memory unit='MiB'>2200</memory>
	I0920 18:48:38.492990  762988 main.go:141] libmachine: (ha-525790-m03)   <vcpu>2</vcpu>
	I0920 18:48:38.492996  762988 main.go:141] libmachine: (ha-525790-m03)   <features>
	I0920 18:48:38.493008  762988 main.go:141] libmachine: (ha-525790-m03)     <acpi/>
	I0920 18:48:38.493014  762988 main.go:141] libmachine: (ha-525790-m03)     <apic/>
	I0920 18:48:38.493024  762988 main.go:141] libmachine: (ha-525790-m03)     <pae/>
	I0920 18:48:38.493031  762988 main.go:141] libmachine: (ha-525790-m03)     
	I0920 18:48:38.493036  762988 main.go:141] libmachine: (ha-525790-m03)   </features>
	I0920 18:48:38.493042  762988 main.go:141] libmachine: (ha-525790-m03)   <cpu mode='host-passthrough'>
	I0920 18:48:38.493047  762988 main.go:141] libmachine: (ha-525790-m03)   
	I0920 18:48:38.493051  762988 main.go:141] libmachine: (ha-525790-m03)   </cpu>
	I0920 18:48:38.493058  762988 main.go:141] libmachine: (ha-525790-m03)   <os>
	I0920 18:48:38.493071  762988 main.go:141] libmachine: (ha-525790-m03)     <type>hvm</type>
	I0920 18:48:38.493106  762988 main.go:141] libmachine: (ha-525790-m03)     <boot dev='cdrom'/>
	I0920 18:48:38.493129  762988 main.go:141] libmachine: (ha-525790-m03)     <boot dev='hd'/>
	I0920 18:48:38.493143  762988 main.go:141] libmachine: (ha-525790-m03)     <bootmenu enable='no'/>
	I0920 18:48:38.493157  762988 main.go:141] libmachine: (ha-525790-m03)   </os>
	I0920 18:48:38.493169  762988 main.go:141] libmachine: (ha-525790-m03)   <devices>
	I0920 18:48:38.493180  762988 main.go:141] libmachine: (ha-525790-m03)     <disk type='file' device='cdrom'>
	I0920 18:48:38.493199  762988 main.go:141] libmachine: (ha-525790-m03)       <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/boot2docker.iso'/>
	I0920 18:48:38.493210  762988 main.go:141] libmachine: (ha-525790-m03)       <target dev='hdc' bus='scsi'/>
	I0920 18:48:38.493219  762988 main.go:141] libmachine: (ha-525790-m03)       <readonly/>
	I0920 18:48:38.493233  762988 main.go:141] libmachine: (ha-525790-m03)     </disk>
	I0920 18:48:38.493245  762988 main.go:141] libmachine: (ha-525790-m03)     <disk type='file' device='disk'>
	I0920 18:48:38.493262  762988 main.go:141] libmachine: (ha-525790-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 18:48:38.493279  762988 main.go:141] libmachine: (ha-525790-m03)       <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/ha-525790-m03.rawdisk'/>
	I0920 18:48:38.493292  762988 main.go:141] libmachine: (ha-525790-m03)       <target dev='hda' bus='virtio'/>
	I0920 18:48:38.493309  762988 main.go:141] libmachine: (ha-525790-m03)     </disk>
	I0920 18:48:38.493325  762988 main.go:141] libmachine: (ha-525790-m03)     <interface type='network'>
	I0920 18:48:38.493333  762988 main.go:141] libmachine: (ha-525790-m03)       <source network='mk-ha-525790'/>
	I0920 18:48:38.493341  762988 main.go:141] libmachine: (ha-525790-m03)       <model type='virtio'/>
	I0920 18:48:38.493348  762988 main.go:141] libmachine: (ha-525790-m03)     </interface>
	I0920 18:48:38.493354  762988 main.go:141] libmachine: (ha-525790-m03)     <interface type='network'>
	I0920 18:48:38.493361  762988 main.go:141] libmachine: (ha-525790-m03)       <source network='default'/>
	I0920 18:48:38.493368  762988 main.go:141] libmachine: (ha-525790-m03)       <model type='virtio'/>
	I0920 18:48:38.493373  762988 main.go:141] libmachine: (ha-525790-m03)     </interface>
	I0920 18:48:38.493379  762988 main.go:141] libmachine: (ha-525790-m03)     <serial type='pty'>
	I0920 18:48:38.493384  762988 main.go:141] libmachine: (ha-525790-m03)       <target port='0'/>
	I0920 18:48:38.493391  762988 main.go:141] libmachine: (ha-525790-m03)     </serial>
	I0920 18:48:38.493400  762988 main.go:141] libmachine: (ha-525790-m03)     <console type='pty'>
	I0920 18:48:38.493407  762988 main.go:141] libmachine: (ha-525790-m03)       <target type='serial' port='0'/>
	I0920 18:48:38.493412  762988 main.go:141] libmachine: (ha-525790-m03)     </console>
	I0920 18:48:38.493418  762988 main.go:141] libmachine: (ha-525790-m03)     <rng model='virtio'>
	I0920 18:48:38.493427  762988 main.go:141] libmachine: (ha-525790-m03)       <backend model='random'>/dev/random</backend>
	I0920 18:48:38.493440  762988 main.go:141] libmachine: (ha-525790-m03)     </rng>
	I0920 18:48:38.493450  762988 main.go:141] libmachine: (ha-525790-m03)     
	I0920 18:48:38.493460  762988 main.go:141] libmachine: (ha-525790-m03)     
	I0920 18:48:38.493468  762988 main.go:141] libmachine: (ha-525790-m03)   </devices>
	I0920 18:48:38.493474  762988 main.go:141] libmachine: (ha-525790-m03) </domain>
	I0920 18:48:38.493482  762988 main.go:141] libmachine: (ha-525790-m03) 
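For readers unfamiliar with the KVM driver, the XML dumped above is an ordinary libvirt domain definition. As a rough, hedged sketch of what "define libvirt domain using xml" / "Creating domain" amounts to, the following standalone Go program shells out to virsh (the driver itself uses libvirt's Go bindings, and the file name ha-525790-m03.xml is assumed here purely for illustration):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// defineAndStart registers a libvirt domain from an XML file and boots it,
	// roughly mirroring the define/create steps in the log above.
	// Paths and the domain name are illustrative assumptions, not taken from minikube.
	func defineAndStart(xmlPath, domain string) error {
		if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
			return fmt.Errorf("virsh define: %v: %s", err, out)
		}
		if out, err := exec.Command("virsh", "start", domain).CombinedOutput(); err != nil {
			return fmt.Errorf("virsh start: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := defineAndStart("ha-525790-m03.xml", "ha-525790-m03"); err != nil {
			log.Fatal(err)
		}
	}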
	I0920 18:48:38.499885  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:a8:31:1e in network default
	I0920 18:48:38.500386  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:38.500420  762988 main.go:141] libmachine: (ha-525790-m03) Ensuring networks are active...
	I0920 18:48:38.501164  762988 main.go:141] libmachine: (ha-525790-m03) Ensuring network default is active
	I0920 18:48:38.501467  762988 main.go:141] libmachine: (ha-525790-m03) Ensuring network mk-ha-525790 is active
	I0920 18:48:38.501827  762988 main.go:141] libmachine: (ha-525790-m03) Getting domain xml...
	I0920 18:48:38.502449  762988 main.go:141] libmachine: (ha-525790-m03) Creating domain...
	I0920 18:48:39.736443  762988 main.go:141] libmachine: (ha-525790-m03) Waiting to get IP...
	I0920 18:48:39.737400  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:39.737834  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:39.737861  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:39.737801  763765 retry.go:31] will retry after 302.940885ms: waiting for machine to come up
	I0920 18:48:40.042424  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:40.043046  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:40.043071  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:40.042996  763765 retry.go:31] will retry after 350.440595ms: waiting for machine to come up
	I0920 18:48:40.395674  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:40.396221  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:40.396257  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:40.396163  763765 retry.go:31] will retry after 469.287011ms: waiting for machine to come up
	I0920 18:48:40.866499  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:40.866994  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:40.867018  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:40.866942  763765 retry.go:31] will retry after 590.023713ms: waiting for machine to come up
	I0920 18:48:41.458823  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:41.459324  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:41.459354  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:41.459270  763765 retry.go:31] will retry after 548.369209ms: waiting for machine to come up
	I0920 18:48:42.009043  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:42.009525  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:42.009554  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:42.009477  763765 retry.go:31] will retry after 690.597661ms: waiting for machine to come up
	I0920 18:48:42.701450  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:42.701900  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:42.701929  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:42.701849  763765 retry.go:31] will retry after 975.285461ms: waiting for machine to come up
	I0920 18:48:43.678426  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:43.678873  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:43.678903  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:43.678807  763765 retry.go:31] will retry after 921.744359ms: waiting for machine to come up
	I0920 18:48:44.601892  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:44.602442  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:44.602473  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:44.602393  763765 retry.go:31] will retry after 1.426461906s: waiting for machine to come up
	I0920 18:48:46.031141  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:46.031614  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:46.031647  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:46.031561  763765 retry.go:31] will retry after 1.995117324s: waiting for machine to come up
	I0920 18:48:48.028189  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:48.028849  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:48.028882  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:48.028801  763765 retry.go:31] will retry after 2.180775421s: waiting for machine to come up
	I0920 18:48:50.212117  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:50.212617  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:50.212648  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:50.212544  763765 retry.go:31] will retry after 2.921621074s: waiting for machine to come up
	I0920 18:48:53.136087  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:53.136635  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:53.136663  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:53.136590  763765 retry.go:31] will retry after 2.977541046s: waiting for machine to come up
	I0920 18:48:56.115874  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:56.116235  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:56.116257  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:56.116195  763765 retry.go:31] will retry after 3.995277529s: waiting for machine to come up
	I0920 18:49:00.113196  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.113677  762988 main.go:141] libmachine: (ha-525790-m03) Found IP for machine: 192.168.39.105
	I0920 18:49:00.113703  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has current primary IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
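The long run of "will retry after ..." lines above is a poll loop with a growing, randomized delay, waiting for the new guest to pick up a DHCP lease. A stripped-down sketch of that pattern is below; lookupIP is a placeholder for "ask libvirt for the domain's lease", not minikube's real helper, and the backoff constants are only indicative of the intervals seen in the log:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for querying the domain's DHCP lease; it fails until
	// the guest has an address.
	func lookupIP(domain string) (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	// waitForIP polls with a randomized, growing delay, much like the retry.go
	// lines above ("will retry after 302ms ... 3.99s").
	func waitForIP(domain string, deadline time.Duration) (string, error) {
		start := time.Now()
		backoff := 300 * time.Millisecond
		for time.Since(start) < deadline {
			if ip, err := lookupIP(domain); err == nil {
				return ip, nil
			}
			wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			backoff = backoff * 3 / 2 // grow the delay between probes
		}
		return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
	}

	func main() {
		if ip, err := waitForIP("ha-525790-m03", 3*time.Second); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("Found IP for machine:", ip)
		}
	}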
	I0920 18:49:00.113712  762988 main.go:141] libmachine: (ha-525790-m03) Reserving static IP address...
	I0920 18:49:00.114010  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find host DHCP lease matching {name: "ha-525790-m03", mac: "52:54:00:c8:21:86", ip: "192.168.39.105"} in network mk-ha-525790
	I0920 18:49:00.188644  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Getting to WaitForSSH function...
	I0920 18:49:00.188711  762988 main.go:141] libmachine: (ha-525790-m03) Reserved static IP address: 192.168.39.105
	I0920 18:49:00.188740  762988 main.go:141] libmachine: (ha-525790-m03) Waiting for SSH to be available...
	I0920 18:49:00.191758  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.192256  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:00.192284  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.192476  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Using SSH client type: external
	I0920 18:49:00.192503  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/id_rsa (-rw-------)
	I0920 18:49:00.192535  762988 main.go:141] libmachine: (ha-525790-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:49:00.192565  762988 main.go:141] libmachine: (ha-525790-m03) DBG | About to run SSH command:
	I0920 18:49:00.192608  762988 main.go:141] libmachine: (ha-525790-m03) DBG | exit 0
	I0920 18:49:00.319098  762988 main.go:141] libmachine: (ha-525790-m03) DBG | SSH cmd err, output: <nil>: 
	I0920 18:49:00.319375  762988 main.go:141] libmachine: (ha-525790-m03) KVM machine creation complete!
	I0920 18:49:00.319707  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetConfigRaw
	I0920 18:49:00.320287  762988 main.go:141] libmachine: (ha-525790-m03) Calling .DriverName
	I0920 18:49:00.320484  762988 main.go:141] libmachine: (ha-525790-m03) Calling .DriverName
	I0920 18:49:00.320624  762988 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 18:49:00.320639  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetState
	I0920 18:49:00.321930  762988 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 18:49:00.321949  762988 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 18:49:00.321957  762988 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 18:49:00.321965  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:49:00.324623  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.325172  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:00.325194  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.325388  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:49:00.325587  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:00.325771  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:00.325922  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:49:00.326093  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:49:00.326319  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0920 18:49:00.326331  762988 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 18:49:00.430187  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:49:00.430218  762988 main.go:141] libmachine: Detecting the provisioner...
	I0920 18:49:00.430229  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:49:00.433076  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.433420  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:00.433448  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.433596  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:49:00.433812  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:00.433990  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:00.434135  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:49:00.434275  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:49:00.434454  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0920 18:49:00.434466  762988 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 18:49:00.539754  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 18:49:00.539823  762988 main.go:141] libmachine: found compatible host: buildroot
	I0920 18:49:00.539832  762988 main.go:141] libmachine: Provisioning with buildroot...
	I0920 18:49:00.539852  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetMachineName
	I0920 18:49:00.540100  762988 buildroot.go:166] provisioning hostname "ha-525790-m03"
	I0920 18:49:00.540117  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetMachineName
	I0920 18:49:00.540338  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:49:00.543112  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.543620  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:00.543653  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.543781  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:49:00.543968  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:00.544100  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:00.544196  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:49:00.544321  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:49:00.544478  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0920 18:49:00.544494  762988 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-525790-m03 && echo "ha-525790-m03" | sudo tee /etc/hostname
	I0920 18:49:00.661965  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-525790-m03
	
	I0920 18:49:00.661996  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:49:00.665201  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.665573  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:00.665605  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.665825  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:49:00.666001  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:00.666174  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:00.666276  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:49:00.666436  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:49:00.666619  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0920 18:49:00.666635  762988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-525790-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-525790-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-525790-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:49:00.779769  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:49:00.779801  762988 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19678-739831/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-739831/.minikube}
	I0920 18:49:00.779819  762988 buildroot.go:174] setting up certificates
	I0920 18:49:00.779830  762988 provision.go:84] configureAuth start
	I0920 18:49:00.779838  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetMachineName
	I0920 18:49:00.780148  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetIP
	I0920 18:49:00.783087  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.783547  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:00.783572  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.783793  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:49:00.786303  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.786669  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:00.786697  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.786832  762988 provision.go:143] copyHostCerts
	I0920 18:49:00.786879  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 18:49:00.786917  762988 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem, removing ...
	I0920 18:49:00.786928  762988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 18:49:00.787003  762988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem (1078 bytes)
	I0920 18:49:00.787095  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 18:49:00.787123  762988 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem, removing ...
	I0920 18:49:00.787129  762988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 18:49:00.787169  762988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem (1123 bytes)
	I0920 18:49:00.787241  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 18:49:00.787266  762988 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem, removing ...
	I0920 18:49:00.787273  762988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 18:49:00.787297  762988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem (1679 bytes)
	I0920 18:49:00.787351  762988 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem org=jenkins.ha-525790-m03 san=[127.0.0.1 192.168.39.105 ha-525790-m03 localhost minikube]
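provision.go:117 above generates a TLS server certificate whose subject alternative names are exactly the addresses the new node can be reached by (127.0.0.1, 192.168.39.105, ha-525790-m03, localhost, minikube). A self-contained sketch of building such a certificate with Go's standard library follows; it is self-signed for brevity, whereas minikube signs it with the CA key referenced in the log:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-525790-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches the profile's CertExpiration
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs as listed in the provision.go line above.
			DNSNames:    []string{"ha-525790-m03", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.105")},
		}
		// Self-signed here; minikube instead signs with certs/ca.pem / ca-key.pem.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}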
	I0920 18:49:01.027593  762988 provision.go:177] copyRemoteCerts
	I0920 18:49:01.027666  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:49:01.027706  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:49:01.030883  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.031239  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:01.031269  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.031374  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:49:01.031584  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:01.031757  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:49:01.031880  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/id_rsa Username:docker}
	I0920 18:49:01.112943  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 18:49:01.113017  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:49:01.137911  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 18:49:01.138012  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:49:01.162029  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 18:49:01.162099  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 18:49:01.186294  762988 provision.go:87] duration metric: took 406.448312ms to configureAuth
	I0920 18:49:01.186330  762988 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:49:01.186601  762988 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:49:01.186679  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:49:01.189283  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.189565  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:01.189599  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.189778  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:49:01.190004  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:01.190151  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:01.190284  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:49:01.190437  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:49:01.190651  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0920 18:49:01.190666  762988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:49:01.415670  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:49:01.415702  762988 main.go:141] libmachine: Checking connection to Docker...
	I0920 18:49:01.415710  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetURL
	I0920 18:49:01.417024  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Using libvirt version 6000000
	I0920 18:49:01.419032  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.419386  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:01.419434  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.419554  762988 main.go:141] libmachine: Docker is up and running!
	I0920 18:49:01.419580  762988 main.go:141] libmachine: Reticulating splines...
	I0920 18:49:01.419588  762988 client.go:171] duration metric: took 23.264752776s to LocalClient.Create
	I0920 18:49:01.419627  762988 start.go:167] duration metric: took 23.26482906s to libmachine.API.Create "ha-525790"
	I0920 18:49:01.419643  762988 start.go:293] postStartSetup for "ha-525790-m03" (driver="kvm2")
	I0920 18:49:01.419656  762988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:49:01.419679  762988 main.go:141] libmachine: (ha-525790-m03) Calling .DriverName
	I0920 18:49:01.419934  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:49:01.419967  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:49:01.422004  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.422361  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:01.422390  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.422501  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:49:01.422709  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:01.422888  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:49:01.423046  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/id_rsa Username:docker}
	I0920 18:49:01.505266  762988 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:49:01.509857  762988 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:49:01.509888  762988 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/addons for local assets ...
	I0920 18:49:01.509961  762988 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/files for local assets ...
	I0920 18:49:01.510060  762988 filesync.go:149] local asset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> 7484972.pem in /etc/ssl/certs
	I0920 18:49:01.510077  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /etc/ssl/certs/7484972.pem
	I0920 18:49:01.510189  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:49:01.520278  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 18:49:01.544737  762988 start.go:296] duration metric: took 125.077677ms for postStartSetup
	I0920 18:49:01.544786  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetConfigRaw
	I0920 18:49:01.545420  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetIP
	I0920 18:49:01.548112  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.548447  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:01.548464  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.548782  762988 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 18:49:01.549036  762988 start.go:128] duration metric: took 23.413540127s to createHost
	I0920 18:49:01.549067  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:49:01.551495  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.551851  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:01.551881  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.552018  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:49:01.552201  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:01.552360  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:01.552475  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:49:01.552663  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:49:01.552890  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0920 18:49:01.552905  762988 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:49:01.655748  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726858141.628739337
	
	I0920 18:49:01.655773  762988 fix.go:216] guest clock: 1726858141.628739337
	I0920 18:49:01.655781  762988 fix.go:229] Guest: 2024-09-20 18:49:01.628739337 +0000 UTC Remote: 2024-09-20 18:49:01.549050778 +0000 UTC m=+142.798112058 (delta=79.688559ms)
	I0920 18:49:01.655798  762988 fix.go:200] guest clock delta is within tolerance: 79.688559ms
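The delta reported by fix.go is simply the difference between the two timestamps printed above: 1726858141.628739337 s (guest) minus 1726858141.549050778 s (remote) = 0.079688559 s, i.e. 79.688559 ms, which is why the preceding line reports the guest clock as within tolerance.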
	I0920 18:49:01.655803  762988 start.go:83] releasing machines lock for "ha-525790-m03", held for 23.520508822s
	I0920 18:49:01.655836  762988 main.go:141] libmachine: (ha-525790-m03) Calling .DriverName
	I0920 18:49:01.656125  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetIP
	I0920 18:49:01.658823  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.659297  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:01.659334  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.661900  762988 out.go:177] * Found network options:
	I0920 18:49:01.663362  762988 out.go:177]   - NO_PROXY=192.168.39.149,192.168.39.246
	W0920 18:49:01.664757  762988 proxy.go:119] fail to check proxy env: Error ip not in block
	W0920 18:49:01.664778  762988 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 18:49:01.664795  762988 main.go:141] libmachine: (ha-525790-m03) Calling .DriverName
	I0920 18:49:01.665398  762988 main.go:141] libmachine: (ha-525790-m03) Calling .DriverName
	I0920 18:49:01.665614  762988 main.go:141] libmachine: (ha-525790-m03) Calling .DriverName
	I0920 18:49:01.665705  762988 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:49:01.665745  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	W0920 18:49:01.665812  762988 proxy.go:119] fail to check proxy env: Error ip not in block
	W0920 18:49:01.665852  762988 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 18:49:01.665930  762988 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:49:01.665957  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:49:01.668602  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.668630  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.669063  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:01.669134  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:01.669160  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.669251  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.669405  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:49:01.669623  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:49:01.669648  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:01.669763  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:01.669772  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:49:01.669900  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:49:01.669898  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/id_rsa Username:docker}
	I0920 18:49:01.670073  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/id_rsa Username:docker}
	I0920 18:49:01.914294  762988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:49:01.920631  762988 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:49:01.920746  762988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:49:01.939203  762988 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:49:01.939233  762988 start.go:495] detecting cgroup driver to use...
	I0920 18:49:01.939298  762988 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:49:01.956879  762988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:49:01.972680  762988 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:49:01.972737  762988 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:49:01.986983  762988 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:49:02.002057  762988 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:49:02.127309  762988 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:49:02.284949  762988 docker.go:233] disabling docker service ...
	I0920 18:49:02.285026  762988 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:49:02.300753  762988 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:49:02.314717  762988 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:49:02.455235  762988 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:49:02.575677  762988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:49:02.589417  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:49:02.609243  762988 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:49:02.609306  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:49:02.619812  762988 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:49:02.619883  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:49:02.630268  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:49:02.640696  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:49:02.651017  762988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:49:02.661779  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:49:02.672169  762988 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:49:02.689257  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
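Taken together, the sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with settings along these lines. This is a reconstruction from the commands, not a capture of the file, and the surrounding TOML section headers are omitted:

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]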
	I0920 18:49:02.699324  762988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:49:02.708522  762988 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:49:02.708581  762988 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:49:02.724380  762988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
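The failed sysctl probe at 18:49:02.708 is expected on a fresh guest: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded, so the probe fails, modprobe br_netfilter is run, and IPv4 forwarding is then enabled for the pod network.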
	I0920 18:49:02.735250  762988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:49:02.845773  762988 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:49:02.940137  762988 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:49:02.940234  762988 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:49:02.945137  762988 start.go:563] Will wait 60s for crictl version
	I0920 18:49:02.945195  762988 ssh_runner.go:195] Run: which crictl
	I0920 18:49:02.949025  762988 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:49:02.985466  762988 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:49:02.985563  762988 ssh_runner.go:195] Run: crio --version
	I0920 18:49:03.014070  762988 ssh_runner.go:195] Run: crio --version
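Note: the three commands above probe the freshly restarted runtime before Kubernetes setup continues. A hedged sketch of the same runtime-version probe, shelling out to crictl just as ssh_runner does (crictl is assumed to be on PATH and to read the socket from /etc/crictl.yaml written earlier):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Equivalent of "sudo /usr/bin/crictl version" in the log; prints
    	// RuntimeName / RuntimeVersion / RuntimeApiVersion for the configured
    	// CRI socket (cri-o 1.29.1 here).
    	out, err := exec.Command("crictl", "version").CombinedOutput()
    	if err != nil {
    		fmt.Printf("crictl version failed: %v\n%s", err, out)
    		return
    	}
    	fmt.Print(string(out))
    }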
	I0920 18:49:03.043847  762988 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:49:03.045096  762988 out.go:177]   - env NO_PROXY=192.168.39.149
	I0920 18:49:03.046434  762988 out.go:177]   - env NO_PROXY=192.168.39.149,192.168.39.246
	I0920 18:49:03.047542  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetIP
	I0920 18:49:03.050349  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:03.050680  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:03.050706  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:03.050945  762988 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:49:03.055055  762988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:49:03.067151  762988 mustload.go:65] Loading cluster: ha-525790
	I0920 18:49:03.067360  762988 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:49:03.067653  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:49:03.067702  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:49:03.083141  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43467
	I0920 18:49:03.083620  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:49:03.084155  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:49:03.084195  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:49:03.084513  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:49:03.084805  762988 main.go:141] libmachine: (ha-525790) Calling .GetState
	I0920 18:49:03.086455  762988 host.go:66] Checking if "ha-525790" exists ...
	I0920 18:49:03.086791  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:49:03.086828  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:49:03.102141  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39347
	I0920 18:49:03.102510  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:49:03.103060  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:49:03.103086  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:49:03.103433  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:49:03.103638  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:49:03.103800  762988 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790 for IP: 192.168.39.105
	I0920 18:49:03.103812  762988 certs.go:194] generating shared ca certs ...
	I0920 18:49:03.103827  762988 certs.go:226] acquiring lock for ca certs: {Name:mkf559981e1ff96dd3b092845a7637f34a653668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:49:03.103970  762988 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key
	I0920 18:49:03.104025  762988 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key
	I0920 18:49:03.104040  762988 certs.go:256] generating profile certs ...
	I0920 18:49:03.104161  762988 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.key
	I0920 18:49:03.104187  762988 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.482e4680
	I0920 18:49:03.104203  762988 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.482e4680 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.149 192.168.39.246 192.168.39.105 192.168.39.254]
	I0920 18:49:03.247720  762988 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.482e4680 ...
	I0920 18:49:03.247759  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.482e4680: {Name:mk130da53fe193e08a7298b921e0e7264fd28276 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:49:03.247934  762988 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.482e4680 ...
	I0920 18:49:03.247946  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.482e4680: {Name:mk01fbdfb06a85f266d7928f14dec501e347df1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:49:03.248017  762988 certs.go:381] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.482e4680 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt
	I0920 18:49:03.248149  762988 certs.go:385] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.482e4680 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key
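Note: the apiserver profile cert generated above is an ordinary x509 server certificate whose SAN list carries every IP the API server must answer on: the service VIP 10.96.0.1, loopback, the three node IPs and the kube-vip address 192.168.39.254 (see the IP list in the crypto.go line). A minimal crypto/x509 sketch of signing such a cert with an existing CA; file names, the subject CN and error handling are illustrative placeholders, not minikube's actual layout:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Load the CA pair (placeholder paths; errors elided for brevity).
    	caPEM, _ := os.ReadFile("ca.crt")
    	caKeyPEM, _ := os.ReadFile("ca.key")
    	caBlock, _ := pem.Decode(caPEM)
    	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
    	keyBlock, _ := pem.Decode(caKeyPEM)
    	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

    	// Fresh key for the apiserver serving cert.
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)

    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"}, // placeholder CN
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SAN IPs taken from the log line above.
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.168.39.149"), net.ParseIP("192.168.39.246"),
    			net.ParseIP("192.168.39.105"), net.ParseIP("192.168.39.254"),
    		},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }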
	I0920 18:49:03.248278  762988 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key
	I0920 18:49:03.248294  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 18:49:03.248307  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 18:49:03.248321  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 18:49:03.248333  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 18:49:03.248345  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 18:49:03.248357  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 18:49:03.248369  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 18:49:03.270972  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 18:49:03.271068  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem (1338 bytes)
	W0920 18:49:03.271105  762988 certs.go:480] ignoring /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497_empty.pem, impossibly tiny 0 bytes
	I0920 18:49:03.271116  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:49:03.271137  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:49:03.271158  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:49:03.271180  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem (1679 bytes)
	I0920 18:49:03.271215  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 18:49:03.271243  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /usr/share/ca-certificates/7484972.pem
	I0920 18:49:03.271257  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:49:03.271268  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem -> /usr/share/ca-certificates/748497.pem
	I0920 18:49:03.271305  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:49:03.274365  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:49:03.274796  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:49:03.274826  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:49:03.275040  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:49:03.275257  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:49:03.275432  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:49:03.275609  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:49:03.347244  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0920 18:49:03.352573  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0920 18:49:03.366074  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0920 18:49:03.370940  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0920 18:49:03.383525  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0920 18:49:03.387790  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0920 18:49:03.401524  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0920 18:49:03.406898  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0920 18:49:03.418198  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0920 18:49:03.422213  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0920 18:49:03.432483  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0920 18:49:03.436644  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0920 18:49:03.447720  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:49:03.473142  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:49:03.497800  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:49:03.522032  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:49:03.546357  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0920 18:49:03.569451  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:49:03.592748  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:49:03.618320  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:49:03.643316  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /usr/share/ca-certificates/7484972.pem (1708 bytes)
	I0920 18:49:03.669027  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:49:03.693106  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem --> /usr/share/ca-certificates/748497.pem (1338 bytes)
	I0920 18:49:03.717412  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0920 18:49:03.736210  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0920 18:49:03.752820  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0920 18:49:03.769208  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0920 18:49:03.786468  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0920 18:49:03.803392  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0920 18:49:03.819806  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0920 18:49:03.836525  762988 ssh_runner.go:195] Run: openssl version
	I0920 18:49:03.842244  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7484972.pem && ln -fs /usr/share/ca-certificates/7484972.pem /etc/ssl/certs/7484972.pem"
	I0920 18:49:03.852769  762988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7484972.pem
	I0920 18:49:03.857540  762988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 18:33 /usr/share/ca-certificates/7484972.pem
	I0920 18:49:03.857596  762988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7484972.pem
	I0920 18:49:03.863268  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7484972.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:49:03.873806  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:49:03.884262  762988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:49:03.888603  762988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:49:03.888657  762988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:49:03.894115  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:49:03.904764  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/748497.pem && ln -fs /usr/share/ca-certificates/748497.pem /etc/ssl/certs/748497.pem"
	I0920 18:49:03.915491  762988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/748497.pem
	I0920 18:49:03.920009  762988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 18:33 /usr/share/ca-certificates/748497.pem
	I0920 18:49:03.920061  762988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/748497.pem
	I0920 18:49:03.925625  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/748497.pem /etc/ssl/certs/51391683.0"
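Note: each pair of commands above computes OpenSSL's subject hash for a CA file and then links /etc/ssl/certs/<hash>.0 to it, which is the c_rehash-style layout OpenSSL clients use to find trusted CAs by directory lookup. A rough Go sketch of the same two steps, delegating the hash to the openssl binary exactly as the log does (the cert path is illustrative and the snippet assumes write access to /etc/ssl/certs):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // linkCACert reproduces the "openssl x509 -hash -noout" + "ln -fs" pair:
    // compute the subject hash of certPath and point /etc/ssl/certs/<hash>.0 at it.
    func linkCACert(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %v", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	return exec.Command("ln", "-fs", certPath, link).Run()
    }

    func main() {
    	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Println(err)
    	}
    }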
	I0920 18:49:03.936257  762988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:49:03.940216  762988 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 18:49:03.940272  762988 kubeadm.go:934] updating node {m03 192.168.39.105 8443 v1.31.1 crio true true} ...
	I0920 18:49:03.940372  762988 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-525790-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:49:03.940409  762988 kube-vip.go:115] generating kube-vip config ...
	I0920 18:49:03.940448  762988 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 18:49:03.957917  762988 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 18:49:03.958005  762988 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0920 18:49:03.958067  762988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:49:03.967572  762988 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0920 18:49:03.967624  762988 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0920 18:49:03.976974  762988 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0920 18:49:03.976987  762988 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0920 18:49:03.977005  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 18:49:03.976978  762988 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0920 18:49:03.977048  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 18:49:03.977060  762988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 18:49:03.977022  762988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:49:03.977160  762988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 18:49:03.986571  762988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0920 18:49:03.986605  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0920 18:49:03.986658  762988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0920 18:49:03.986692  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0920 18:49:04.010382  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 18:49:04.010507  762988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 18:49:04.099814  762988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0920 18:49:04.099870  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
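Note: because /var/lib/minikube/binaries/v1.31.1 did not exist on the new node, kubectl, kubeadm and kubelet are copied over SSH from the runner's local cache. The "Not caching binary" lines show how a cold cache would instead be filled from dl.k8s.io, using the published .sha256 file as the checksum. A hedged Go sketch of that download-and-verify step (URL matches the log; the destination path and lack of retries are illustrative):

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"strings"
    )

    // fetchChecked downloads url to dst and verifies it against the SHA-256
    // published at url+".sha256", mirroring the checksum= URLs in the log.
    func fetchChecked(url, dst string) error {
    	sumResp, err := http.Get(url + ".sha256")
    	if err != nil {
    		return err
    	}
    	defer sumResp.Body.Close()
    	sumBytes, _ := io.ReadAll(sumResp.Body)
    	want := strings.TrimSpace(strings.Fields(string(sumBytes))[0])

    	resp, err := http.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()

    	f, err := os.Create(dst)
    	if err != nil {
    		return err
    	}
    	defer f.Close()

    	h := sha256.New()
    	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
    		return err
    	}
    	if got := hex.EncodeToString(h.Sum(nil)); got != want {
    		return fmt.Errorf("checksum mismatch for %s: got %s want %s", url, got, want)
    	}
    	return nil
    }

    func main() {
    	err := fetchChecked("https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet", "/tmp/kubelet")
    	fmt.Println(err)
    }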
	I0920 18:49:04.872454  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0920 18:49:04.882387  762988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 18:49:04.899462  762988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:49:04.916731  762988 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 18:49:04.933245  762988 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 18:49:04.937315  762988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:49:04.950503  762988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:49:05.076487  762988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:49:05.092667  762988 host.go:66] Checking if "ha-525790" exists ...
	I0920 18:49:05.093146  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:49:05.093208  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:49:05.109982  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37499
	I0920 18:49:05.110528  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:49:05.111155  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:49:05.111179  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:49:05.111484  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:49:05.111774  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:49:05.111942  762988 start.go:317] joinCluster: &{Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:49:05.112135  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0920 18:49:05.112159  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:49:05.115062  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:49:05.115484  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:49:05.115515  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:49:05.115682  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:49:05.115883  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:49:05.116066  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:49:05.116238  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:49:05.305796  762988 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:49:05.305864  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 39ds8x.uncxzpvszbuvr57z --discovery-token-ca-cert-hash sha256:947ef21afc8104efa9fe7e5dbe397ab7540e2665a521761d784eb9c9d11b061d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-525790-m03 --control-plane --apiserver-advertise-address=192.168.39.105 --apiserver-bind-port=8443"
	I0920 18:49:27.719468  762988 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 39ds8x.uncxzpvszbuvr57z --discovery-token-ca-cert-hash sha256:947ef21afc8104efa9fe7e5dbe397ab7540e2665a521761d784eb9c9d11b061d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-525790-m03 --control-plane --apiserver-advertise-address=192.168.39.105 --apiserver-bind-port=8443": (22.413569312s)
	I0920 18:49:27.719513  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0920 18:49:28.224417  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-525790-m03 minikube.k8s.io/updated_at=2024_09_20T18_49_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a minikube.k8s.io/name=ha-525790 minikube.k8s.io/primary=false
	I0920 18:49:28.363168  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-525790-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0920 18:49:28.483620  762988 start.go:319] duration metric: took 23.371650439s to joinCluster
	I0920 18:49:28.484099  762988 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:49:28.484156  762988 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:49:28.485758  762988 out.go:177] * Verifying Kubernetes components...
	I0920 18:49:28.487390  762988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:49:28.832062  762988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:49:28.888819  762988 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:49:28.889070  762988 kapi.go:59] client config for ha-525790: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.crt", KeyFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.key", CAFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0920 18:49:28.889131  762988 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.149:8443
	I0920 18:49:28.889340  762988 node_ready.go:35] waiting up to 6m0s for node "ha-525790-m03" to be "Ready" ...
	I0920 18:49:28.889437  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:28.889450  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:28.889462  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:28.889469  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:28.893312  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:29.389975  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:29.390001  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:29.390011  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:29.390015  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:29.393538  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:29.890123  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:29.890149  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:29.890162  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:29.890171  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:29.894353  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:49:30.390136  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:30.390164  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:30.390176  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:30.390181  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:30.393957  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:30.890420  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:30.890442  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:30.890458  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:30.890462  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:30.895075  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:49:30.895862  762988 node_ready.go:53] node "ha-525790-m03" has status "Ready":"False"
	I0920 18:49:31.389871  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:31.389893  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:31.389902  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:31.389907  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:31.393271  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:31.890390  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:31.890411  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:31.890419  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:31.890423  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:31.894048  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:32.389848  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:32.389870  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:32.389879  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:32.389884  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:32.393339  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:32.890299  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:32.890328  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:32.890338  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:32.890343  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:32.893810  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:33.390110  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:33.390140  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:33.390152  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:33.390157  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:33.393525  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:33.393988  762988 node_ready.go:53] node "ha-525790-m03" has status "Ready":"False"
	I0920 18:49:33.890279  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:33.890305  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:33.890317  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:33.890326  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:33.894103  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:34.389629  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:34.389653  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:34.389661  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:34.389666  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:34.393423  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:34.889832  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:34.889861  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:34.889872  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:34.889878  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:34.894113  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:49:35.389632  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:35.389653  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:35.389661  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:35.389668  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:35.392384  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:35.890106  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:35.890141  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:35.890153  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:35.890158  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:35.893183  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:35.893799  762988 node_ready.go:53] node "ha-525790-m03" has status "Ready":"False"
	I0920 18:49:36.390240  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:36.390262  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:36.390275  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:36.390280  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:36.394094  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:36.890179  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:36.890202  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:36.890211  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:36.890216  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:36.893745  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:37.389770  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:37.389795  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:37.389804  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:37.389810  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:37.393011  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:37.889970  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:37.889992  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:37.890000  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:37.890006  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:37.893447  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:37.893999  762988 node_ready.go:53] node "ha-525790-m03" has status "Ready":"False"
	I0920 18:49:38.389862  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:38.389886  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:38.389894  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:38.389898  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:38.393578  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:38.889977  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:38.890002  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:38.890015  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:38.890023  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:38.894709  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:49:39.389961  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:39.389985  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:39.389994  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:39.389997  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:39.393445  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:39.889607  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:39.889639  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:39.889646  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:39.889650  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:39.893375  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:39.894029  762988 node_ready.go:53] node "ha-525790-m03" has status "Ready":"False"
	I0920 18:49:40.389658  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:40.389687  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:40.389699  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:40.389716  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:40.393116  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:40.890100  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:40.890123  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:40.890130  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:40.890135  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:40.893347  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:41.389584  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:41.389611  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:41.389626  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:41.389630  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:41.393223  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:41.890328  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:41.890352  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:41.890361  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:41.890366  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:41.894247  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:41.894758  762988 node_ready.go:53] node "ha-525790-m03" has status "Ready":"False"
	I0920 18:49:42.390094  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:42.390118  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:42.390125  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:42.390129  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:42.393818  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:42.890390  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:42.890413  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:42.890421  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:42.890426  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:42.893913  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:43.390304  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:43.390325  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.390334  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.390338  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.393629  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:43.394194  762988 node_ready.go:49] node "ha-525790-m03" has status "Ready":"True"
	I0920 18:49:43.394215  762988 node_ready.go:38] duration metric: took 14.504859113s for node "ha-525790-m03" to be "Ready" ...
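Note: the loop above is minikube polling GET /api/v1/nodes/ha-525790-m03 roughly every half second until the Ready condition flips to True, with the 6m0s cap logged at the start of the wait. A hedged client-go equivalent of that wait (kubeconfig path and node name are taken from the log; this is an illustration, not minikube's node_ready implementation):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19678-739831/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	// Poll every 500ms for up to 6 minutes, like the waits in the log.
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-525790-m03", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // transient API errors: keep polling
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	fmt.Println("node ready:", err == nil)
    }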
	I0920 18:49:43.394227  762988 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:49:43.394317  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods
	I0920 18:49:43.394332  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.394342  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.394349  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.399934  762988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 18:49:43.406601  762988 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nfnkj" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.406680  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nfnkj
	I0920 18:49:43.406688  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.406695  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.406698  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.409686  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:43.410357  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:43.410375  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.410382  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.410387  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.413203  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:43.414003  762988 pod_ready.go:93] pod "coredns-7c65d6cfc9-nfnkj" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:43.414026  762988 pod_ready.go:82] duration metric: took 7.399649ms for pod "coredns-7c65d6cfc9-nfnkj" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.414037  762988 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rpcds" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.414110  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rpcds
	I0920 18:49:43.414120  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.414132  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.414139  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.416709  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:43.417387  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:43.417403  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.417411  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.417414  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.419923  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:43.420442  762988 pod_ready.go:93] pod "coredns-7c65d6cfc9-rpcds" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:43.420459  762988 pod_ready.go:82] duration metric: took 6.41605ms for pod "coredns-7c65d6cfc9-rpcds" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.420467  762988 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.420515  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/etcd-ha-525790
	I0920 18:49:43.420523  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.420529  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.420533  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.422830  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:43.423442  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:43.423459  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.423470  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.423476  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.425740  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:43.426292  762988 pod_ready.go:93] pod "etcd-ha-525790" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:43.426309  762988 pod_ready.go:82] duration metric: took 5.837018ms for pod "etcd-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.426318  762988 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.426372  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/etcd-ha-525790-m02
	I0920 18:49:43.426378  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.426385  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.426392  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.428740  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:43.429271  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:49:43.429289  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.429295  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.429301  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.431315  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:43.431859  762988 pod_ready.go:93] pod "etcd-ha-525790-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:43.431880  762988 pod_ready.go:82] duration metric: took 5.554102ms for pod "etcd-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.431888  762988 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-525790-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.591305  762988 request.go:632] Waited for 159.354613ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/etcd-ha-525790-m03
	I0920 18:49:43.591397  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/etcd-ha-525790-m03
	I0920 18:49:43.591408  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.591418  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.591426  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.594816  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:43.790451  762988 request.go:632] Waited for 194.957771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:43.790546  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:43.790557  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.790567  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.790572  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.793782  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:43.794516  762988 pod_ready.go:93] pod "etcd-ha-525790-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:43.794545  762988 pod_ready.go:82] duration metric: took 362.651207ms for pod "etcd-ha-525790-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.794561  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.990932  762988 request.go:632] Waited for 196.293385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790
	I0920 18:49:43.991032  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790
	I0920 18:49:43.991044  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.991055  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.991070  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.994301  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:44.191298  762988 request.go:632] Waited for 196.219991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:44.191370  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:44.191378  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:44.191385  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:44.191391  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:44.195180  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:44.195974  762988 pod_ready.go:93] pod "kube-apiserver-ha-525790" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:44.195997  762988 pod_ready.go:82] duration metric: took 401.428334ms for pod "kube-apiserver-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:44.196011  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:44.390919  762988 request.go:632] Waited for 194.788684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790-m02
	I0920 18:49:44.390990  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790-m02
	I0920 18:49:44.390995  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:44.391003  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:44.391008  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:44.394492  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:44.591289  762988 request.go:632] Waited for 196.078558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:49:44.591352  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:49:44.591358  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:44.591365  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:44.591370  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:44.595290  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:44.596291  762988 pod_ready.go:93] pod "kube-apiserver-ha-525790-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:44.596314  762988 pod_ready.go:82] duration metric: took 400.296135ms for pod "kube-apiserver-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:44.596325  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-525790-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:44.790722  762988 request.go:632] Waited for 194.31856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790-m03
	I0920 18:49:44.790804  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790-m03
	I0920 18:49:44.790810  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:44.790818  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:44.790822  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:44.794357  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:44.990524  762988 request.go:632] Waited for 195.282104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:44.990631  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:44.990644  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:44.990655  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:44.990665  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:44.994191  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:44.994903  762988 pod_ready.go:93] pod "kube-apiserver-ha-525790-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:44.994929  762988 pod_ready.go:82] duration metric: took 398.597843ms for pod "kube-apiserver-ha-525790-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:44.994944  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:45.191368  762988 request.go:632] Waited for 196.335448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-525790
	I0920 18:49:45.191459  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-525790
	I0920 18:49:45.191467  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:45.191475  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:45.191483  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:45.195161  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:45.391240  762988 request.go:632] Waited for 195.352512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:45.391325  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:45.391333  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:45.391341  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:45.391346  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:45.396237  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:49:45.397053  762988 pod_ready.go:93] pod "kube-controller-manager-ha-525790" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:45.397069  762988 pod_ready.go:82] duration metric: took 402.117627ms for pod "kube-controller-manager-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:45.397080  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:45.590744  762988 request.go:632] Waited for 193.581272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-525790-m02
	I0920 18:49:45.590855  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-525790-m02
	I0920 18:49:45.590865  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:45.590877  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:45.590883  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:45.594359  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:45.791023  762988 request.go:632] Waited for 195.208519ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:49:45.791108  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:49:45.791116  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:45.791126  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:45.791131  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:45.794779  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:45.795437  762988 pod_ready.go:93] pod "kube-controller-manager-ha-525790-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:45.795459  762988 pod_ready.go:82] duration metric: took 398.37091ms for pod "kube-controller-manager-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:45.795469  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-525790-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:45.990550  762988 request.go:632] Waited for 195.001281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-525790-m03
	I0920 18:49:45.990624  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-525790-m03
	I0920 18:49:45.990630  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:45.990638  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:45.990643  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:45.994052  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:46.191122  762988 request.go:632] Waited for 196.353155ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:46.191247  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:46.191259  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:46.191268  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:46.191274  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:46.194216  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:46.194981  762988 pod_ready.go:93] pod "kube-controller-manager-ha-525790-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:46.195002  762988 pod_ready.go:82] duration metric: took 399.526934ms for pod "kube-controller-manager-ha-525790-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:46.195013  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-958jz" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:46.390922  762988 request.go:632] Waited for 195.832956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-958jz
	I0920 18:49:46.391009  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-958jz
	I0920 18:49:46.391020  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:46.391029  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:46.391035  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:46.394008  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:46.591177  762988 request.go:632] Waited for 196.363553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:46.591252  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:46.591257  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:46.591267  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:46.591274  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:46.594463  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:46.595077  762988 pod_ready.go:93] pod "kube-proxy-958jz" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:46.595099  762988 pod_ready.go:82] duration metric: took 400.079203ms for pod "kube-proxy-958jz" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:46.595109  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dx9pg" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:46.791219  762988 request.go:632] Waited for 195.994883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dx9pg
	I0920 18:49:46.791280  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dx9pg
	I0920 18:49:46.791285  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:46.791294  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:46.791299  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:46.794750  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:46.990905  762988 request.go:632] Waited for 195.399247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:46.990977  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:46.990982  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:46.990990  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:46.990998  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:46.994578  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:46.995251  762988 pod_ready.go:93] pod "kube-proxy-dx9pg" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:46.995275  762988 pod_ready.go:82] duration metric: took 400.160371ms for pod "kube-proxy-dx9pg" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:46.995288  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sspfs" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:47.191109  762988 request.go:632] Waited for 195.732991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sspfs
	I0920 18:49:47.191198  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sspfs
	I0920 18:49:47.191209  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:47.191220  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:47.191229  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:47.194285  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:47.390397  762988 request.go:632] Waited for 195.278961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:49:47.390485  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:49:47.390494  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:47.390502  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:47.390509  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:47.394123  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:47.394634  762988 pod_ready.go:93] pod "kube-proxy-sspfs" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:47.394658  762988 pod_ready.go:82] duration metric: took 399.362351ms for pod "kube-proxy-sspfs" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:47.394668  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:47.590688  762988 request.go:632] Waited for 195.932452ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-525790
	I0920 18:49:47.590750  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-525790
	I0920 18:49:47.590756  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:47.590766  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:47.590773  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:47.594088  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:47.791044  762988 request.go:632] Waited for 196.393517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:47.791127  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:47.791137  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:47.791151  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:47.791160  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:47.794795  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:47.795601  762988 pod_ready.go:93] pod "kube-scheduler-ha-525790" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:47.795620  762988 pod_ready.go:82] duration metric: took 400.94539ms for pod "kube-scheduler-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:47.795629  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:47.990769  762988 request.go:632] Waited for 195.033171ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-525790-m02
	I0920 18:49:47.990860  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-525790-m02
	I0920 18:49:47.990871  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:47.990883  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:47.990894  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:47.994202  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:48.191063  762988 request.go:632] Waited for 196.257455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:49:48.191127  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:49:48.191134  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:48.191144  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:48.191149  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:48.194376  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:48.194886  762988 pod_ready.go:93] pod "kube-scheduler-ha-525790-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:48.194906  762988 pod_ready.go:82] duration metric: took 399.270985ms for pod "kube-scheduler-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:48.194915  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-525790-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:48.390935  762988 request.go:632] Waited for 195.938247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-525790-m03
	I0920 18:49:48.391011  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-525790-m03
	I0920 18:49:48.391029  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:48.391064  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:48.391074  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:48.394097  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:48.591276  762988 request.go:632] Waited for 196.398543ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:48.591340  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:48.591351  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:48.591359  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:48.591363  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:48.594456  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:48.595126  762988 pod_ready.go:93] pod "kube-scheduler-ha-525790-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:48.595147  762988 pod_ready.go:82] duration metric: took 400.225521ms for pod "kube-scheduler-ha-525790-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:48.595159  762988 pod_ready.go:39] duration metric: took 5.200916863s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
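Each "Ready" wait above is the same two-step poll: GET the pod, then GET the node it is scheduled on, repeated until the pod's Ready condition reports True. A rough manual equivalent of that check, assuming kubectl is pointed at the ha-525790 context shown in this log, is:

	kubectl --context ha-525790 -n kube-system get pod etcd-ha-525790 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints "True" once ready
	kubectl --context ha-525790 -n kube-system get pods \
	  -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'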
	I0920 18:49:48.595173  762988 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:49:48.595224  762988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:49:48.611081  762988 api_server.go:72] duration metric: took 20.126887425s to wait for apiserver process to appear ...
	I0920 18:49:48.611105  762988 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:49:48.611130  762988 api_server.go:253] Checking apiserver healthz at https://192.168.39.149:8443/healthz ...
	I0920 18:49:48.616371  762988 api_server.go:279] https://192.168.39.149:8443/healthz returned 200:
	ok
	I0920 18:49:48.616442  762988 round_trippers.go:463] GET https://192.168.39.149:8443/version
	I0920 18:49:48.616450  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:48.616461  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:48.616470  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:48.617373  762988 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0920 18:49:48.617437  762988 api_server.go:141] control plane version: v1.31.1
	I0920 18:49:48.617451  762988 api_server.go:131] duration metric: took 6.339029ms to wait for apiserver health ...
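The healthz probe and the /version request above are plain GETs against the control-plane endpoint; with a kubeconfig for this cluster they can be reproduced directly:

	kubectl --context ha-525790 get --raw /healthz    # expected body: ok
	kubectl --context ha-525790 get --raw /version    # reports gitVersion v1.31.1 here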
	I0920 18:49:48.617458  762988 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:49:48.790943  762988 request.go:632] Waited for 173.409092ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods
	I0920 18:49:48.791019  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods
	I0920 18:49:48.791024  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:48.791031  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:48.791035  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:48.799193  762988 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0920 18:49:48.807423  762988 system_pods.go:59] 24 kube-system pods found
	I0920 18:49:48.807457  762988 system_pods.go:61] "coredns-7c65d6cfc9-nfnkj" [7994989d-6bfa-4d25-b7b7-662d2e6c742c] Running
	I0920 18:49:48.807464  762988 system_pods.go:61] "coredns-7c65d6cfc9-rpcds" [7db58219-7147-4a45-b233-ef3c698566ef] Running
	I0920 18:49:48.807470  762988 system_pods.go:61] "etcd-ha-525790" [f23cd40e-ac8d-451b-9bf9-2ef5d62ef4b6] Running
	I0920 18:49:48.807476  762988 system_pods.go:61] "etcd-ha-525790-m02" [5a29103e-6da3-40d1-be3c-58fdc0f28b54] Running
	I0920 18:49:48.807480  762988 system_pods.go:61] "etcd-ha-525790-m03" [33df920f-e346-4613-af3b-67042a9db421] Running
	I0920 18:49:48.807485  762988 system_pods.go:61] "kindnet-8glgp" [f462782e-1ff6-410a-8359-de3360d380b0] Running
	I0920 18:49:48.807489  762988 system_pods.go:61] "kindnet-9qbm6" [87e8ae18-a561-48ec-9835-27446b6917d3] Running
	I0920 18:49:48.807493  762988 system_pods.go:61] "kindnet-j5mmq" [9ecd60f9-bfbf-4292-8449-869dd3afa02c] Running
	I0920 18:49:48.807498  762988 system_pods.go:61] "kube-apiserver-ha-525790" [0e3563fd-5185-4dc6-8d9b-a7d954b96c8d] Running
	I0920 18:49:48.807503  762988 system_pods.go:61] "kube-apiserver-ha-525790-m02" [b3966e2e-ce3d-4916-b73c-0d80cd1793f0] Running
	I0920 18:49:48.807508  762988 system_pods.go:61] "kube-apiserver-ha-525790-m03" [7649543a-3c54-4627-8a0a-bc1945712ad7] Running
	I0920 18:49:48.807514  762988 system_pods.go:61] "kube-controller-manager-ha-525790" [1d695853-6a7e-487d-a52b-9aceb1fc9ff3] Running
	I0920 18:49:48.807519  762988 system_pods.go:61] "kube-controller-manager-ha-525790-m02" [090c1833-3800-4e13-b9a7-c03680f3d55d] Running
	I0920 18:49:48.807524  762988 system_pods.go:61] "kube-controller-manager-ha-525790-m03" [5e675da3-2dd4-417a-a6f8-d4fe90da0ac0] Running
	I0920 18:49:48.807529  762988 system_pods.go:61] "kube-proxy-958jz" [46603403-eb82-4f15-a1da-da62194a072f] Running
	I0920 18:49:48.807535  762988 system_pods.go:61] "kube-proxy-dx9pg" [aa873f4e-a8f0-49ab-95e9-d81d15b650f5] Running
	I0920 18:49:48.807543  762988 system_pods.go:61] "kube-proxy-sspfs" [15203515-fc45-4624-b97e-8ec247f01e2d] Running
	I0920 18:49:48.807550  762988 system_pods.go:61] "kube-scheduler-ha-525790" [8cb7e23e-c1d1-4753-9758-b17ef9fd08d7] Running
	I0920 18:49:48.807556  762988 system_pods.go:61] "kube-scheduler-ha-525790-m02" [dc9a5561-5d41-445d-a0ba-de3b2405f821] Running
	I0920 18:49:48.807562  762988 system_pods.go:61] "kube-scheduler-ha-525790-m03" [729fa556-4301-49a9-8ed0-506ecb3a8b76] Running
	I0920 18:49:48.807567  762988 system_pods.go:61] "kube-vip-ha-525790" [0b318b1e-7a85-4c8c-8a5a-2fee226d7702] Running
	I0920 18:49:48.807576  762988 system_pods.go:61] "kube-vip-ha-525790-m02" [f2316231-5c1d-4bf2-ae62-5a4202b5818b] Running
	I0920 18:49:48.807581  762988 system_pods.go:61] "kube-vip-ha-525790-m03" [3050094c-de2a-449f-866c-0e8ddceb697d] Running
	I0920 18:49:48.807587  762988 system_pods.go:61] "storage-provisioner" [ea6bf34f-c1f7-4216-a61f-be30846c991b] Running
	I0920 18:49:48.807599  762988 system_pods.go:74] duration metric: took 190.132126ms to wait for pod list to return data ...
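The 24-pod inventory above is simply a listing of the kube-system namespace; the same data the wait loop consumes can be pulled with:

	kubectl --context ha-525790 -n kube-system get pods -o wide
	kubectl --context ha-525790 -n kube-system get pods --no-headers | wc -l   # 24 at this point in the log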
	I0920 18:49:48.807613  762988 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:49:48.991230  762988 request.go:632] Waited for 183.520385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/default/serviceaccounts
	I0920 18:49:48.991298  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/default/serviceaccounts
	I0920 18:49:48.991305  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:48.991315  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:48.991320  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:48.994457  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:48.994600  762988 default_sa.go:45] found service account: "default"
	I0920 18:49:48.994616  762988 default_sa.go:55] duration metric: took 186.997115ms for default service account to be created ...
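The default-service-account wait is satisfied as soon as a ServiceAccount named "default" exists in the default namespace, i.e. the equivalent of:

	kubectl --context ha-525790 -n default get serviceaccount default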
	I0920 18:49:48.994626  762988 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:49:49.191090  762988 request.go:632] Waited for 196.382893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods
	I0920 18:49:49.191150  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods
	I0920 18:49:49.191156  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:49.191167  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:49.191172  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:49.196609  762988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 18:49:49.203953  762988 system_pods.go:86] 24 kube-system pods found
	I0920 18:49:49.203984  762988 system_pods.go:89] "coredns-7c65d6cfc9-nfnkj" [7994989d-6bfa-4d25-b7b7-662d2e6c742c] Running
	I0920 18:49:49.203991  762988 system_pods.go:89] "coredns-7c65d6cfc9-rpcds" [7db58219-7147-4a45-b233-ef3c698566ef] Running
	I0920 18:49:49.203997  762988 system_pods.go:89] "etcd-ha-525790" [f23cd40e-ac8d-451b-9bf9-2ef5d62ef4b6] Running
	I0920 18:49:49.204001  762988 system_pods.go:89] "etcd-ha-525790-m02" [5a29103e-6da3-40d1-be3c-58fdc0f28b54] Running
	I0920 18:49:49.204005  762988 system_pods.go:89] "etcd-ha-525790-m03" [33df920f-e346-4613-af3b-67042a9db421] Running
	I0920 18:49:49.204010  762988 system_pods.go:89] "kindnet-8glgp" [f462782e-1ff6-410a-8359-de3360d380b0] Running
	I0920 18:49:49.204015  762988 system_pods.go:89] "kindnet-9qbm6" [87e8ae18-a561-48ec-9835-27446b6917d3] Running
	I0920 18:49:49.204020  762988 system_pods.go:89] "kindnet-j5mmq" [9ecd60f9-bfbf-4292-8449-869dd3afa02c] Running
	I0920 18:49:49.204026  762988 system_pods.go:89] "kube-apiserver-ha-525790" [0e3563fd-5185-4dc6-8d9b-a7d954b96c8d] Running
	I0920 18:49:49.204033  762988 system_pods.go:89] "kube-apiserver-ha-525790-m02" [b3966e2e-ce3d-4916-b73c-0d80cd1793f0] Running
	I0920 18:49:49.204042  762988 system_pods.go:89] "kube-apiserver-ha-525790-m03" [7649543a-3c54-4627-8a0a-bc1945712ad7] Running
	I0920 18:49:49.204048  762988 system_pods.go:89] "kube-controller-manager-ha-525790" [1d695853-6a7e-487d-a52b-9aceb1fc9ff3] Running
	I0920 18:49:49.204061  762988 system_pods.go:89] "kube-controller-manager-ha-525790-m02" [090c1833-3800-4e13-b9a7-c03680f3d55d] Running
	I0920 18:49:49.204067  762988 system_pods.go:89] "kube-controller-manager-ha-525790-m03" [5e675da3-2dd4-417a-a6f8-d4fe90da0ac0] Running
	I0920 18:49:49.204073  762988 system_pods.go:89] "kube-proxy-958jz" [46603403-eb82-4f15-a1da-da62194a072f] Running
	I0920 18:49:49.204081  762988 system_pods.go:89] "kube-proxy-dx9pg" [aa873f4e-a8f0-49ab-95e9-d81d15b650f5] Running
	I0920 18:49:49.204086  762988 system_pods.go:89] "kube-proxy-sspfs" [15203515-fc45-4624-b97e-8ec247f01e2d] Running
	I0920 18:49:49.204093  762988 system_pods.go:89] "kube-scheduler-ha-525790" [8cb7e23e-c1d1-4753-9758-b17ef9fd08d7] Running
	I0920 18:49:49.204097  762988 system_pods.go:89] "kube-scheduler-ha-525790-m02" [dc9a5561-5d41-445d-a0ba-de3b2405f821] Running
	I0920 18:49:49.204103  762988 system_pods.go:89] "kube-scheduler-ha-525790-m03" [729fa556-4301-49a9-8ed0-506ecb3a8b76] Running
	I0920 18:49:49.204107  762988 system_pods.go:89] "kube-vip-ha-525790" [0b318b1e-7a85-4c8c-8a5a-2fee226d7702] Running
	I0920 18:49:49.204115  762988 system_pods.go:89] "kube-vip-ha-525790-m02" [f2316231-5c1d-4bf2-ae62-5a4202b5818b] Running
	I0920 18:49:49.204121  762988 system_pods.go:89] "kube-vip-ha-525790-m03" [3050094c-de2a-449f-866c-0e8ddceb697d] Running
	I0920 18:49:49.204127  762988 system_pods.go:89] "storage-provisioner" [ea6bf34f-c1f7-4216-a61f-be30846c991b] Running
	I0920 18:49:49.204137  762988 system_pods.go:126] duration metric: took 209.50314ms to wait for k8s-apps to be running ...
	I0920 18:49:49.204149  762988 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:49:49.204205  762988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:49:49.220678  762988 system_svc.go:56] duration metric: took 16.519226ms WaitForService to wait for kubelet
	I0920 18:49:49.220713  762988 kubeadm.go:582] duration metric: took 20.736522024s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
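The process and service checks in this phase are executed on the node over SSH (the ssh_runner lines above show the exact commands minikube runs). Run locally on the control-plane VM they amount to roughly:

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'          # apiserver process is present
	sudo systemctl is-active --quiet kubelet && echo ok   # kubelet unit is active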
	I0920 18:49:49.220737  762988 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:49:49.391073  762988 request.go:632] Waited for 170.223638ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes
	I0920 18:49:49.391144  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes
	I0920 18:49:49.391152  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:49.391163  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:49.391185  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:49.395131  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:49.396058  762988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:49:49.396082  762988 node_conditions.go:123] node cpu capacity is 2
	I0920 18:49:49.396097  762988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:49:49.396102  762988 node_conditions.go:123] node cpu capacity is 2
	I0920 18:49:49.396107  762988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:49:49.396112  762988 node_conditions.go:123] node cpu capacity is 2
	I0920 18:49:49.396118  762988 node_conditions.go:105] duration metric: took 175.374616ms to run NodePressure ...
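The NodePressure step reads each node's reported capacity; all three nodes here show 2 CPUs and 17734596Ki of ephemeral storage. The same figures can be listed with:

	kubectl --context ha-525790 get nodes \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity}{"\n"}{end}'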
	I0920 18:49:49.396133  762988 start.go:241] waiting for startup goroutines ...
	I0920 18:49:49.396165  762988 start.go:255] writing updated cluster config ...
	I0920 18:49:49.396463  762988 ssh_runner.go:195] Run: rm -f paused
	I0920 18:49:49.451056  762988 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:49:49.453054  762988 out.go:177] * Done! kubectl is now configured to use "ha-525790" cluster and "default" namespace by default
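The "==> CRI-O <==" section that follows is the crio daemon log captured from the ha-525790 node: the kubelet polling the CRI endpoints (Version, ImageFsInfo, ListContainers) every few hundred milliseconds. The same queries can be issued by hand on the node with crictl; the socket path below is an assumption based on CRI-O's default:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a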
	
	
	==> CRI-O <==
	Sep 20 18:53:31 ha-525790 crio[657]: time="2024-09-20 18:53:31.211033839Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858411211010986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dda397ad-d32d-4946-863a-a0280e8f1741 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:53:31 ha-525790 crio[657]: time="2024-09-20 18:53:31.211631907Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ecf99ca9-a46b-430d-a926-edb8cf2fef7b name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:31 ha-525790 crio[657]: time="2024-09-20 18:53:31.211685446Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ecf99ca9-a46b-430d-a926-edb8cf2fef7b name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:31 ha-525790 crio[657]: time="2024-09-20 18:53:31.211929810Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:344b03b51dddb76a43e34eff505cae1c7f2fc0c407fd4b1907c7b90ca3f1740d,PodSandboxId:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726858192106122080,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57fdde7a007ff9a10cfbb40f67eb3fd2036aeb4918ebe808fdb7ab94429b6f90,PodSandboxId:f2f3faeb3feb37731a72146ab0e2730c2f00b0a64c288e6aa139840b8d1852b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726858057039915142,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e,PodSandboxId:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858056980739363,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1,PodSandboxId:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858056983536331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6b
fa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98,PodSandboxId:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268580
44669106744,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8,PodSandboxId:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858044313140306,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c704a3be19bcb0cfb653cb3bdad4548ff16ab59fc886290b6b1ed57874b379cc,PodSandboxId:afc309e0288a67308501f446405f65d8615c4060f819039947aff5f12a4b1be9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726858035446566658,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede9a5fdac3bc6f58bd35cff44d56d88,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1196adfd1199669f289106197057a6a027e2a97c97830961d5c102c7143d67bb,PodSandboxId:4ed8fcb6c51972392f851f91d41ef974ee35c8b05f66d02ba0fbacb37d072738,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858033110408572,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706,PodSandboxId:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858033123239626,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93,PodSandboxId:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858033076459540,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49582cb9e07244a19c1edf1accdab94a1702d3cccc9e120b67b8c49f7629db72,PodSandboxId:ee2f4d881a4246f1bf78be961d0510d0f0774b7bcb9c2febc0c3568a63704973,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858033054127944,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ecf99ca9-a46b-430d-a926-edb8cf2fef7b name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:31 ha-525790 crio[657]: time="2024-09-20 18:53:31.265690665Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2e88423e-905c-4b14-a60d-8dca9b4b2ed3 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:53:31 ha-525790 crio[657]: time="2024-09-20 18:53:31.265761208Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2e88423e-905c-4b14-a60d-8dca9b4b2ed3 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:53:31 ha-525790 crio[657]: time="2024-09-20 18:53:31.266913255Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=52400721-9e6d-4f79-8494-17f02425272b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:53:31 ha-525790 crio[657]: time="2024-09-20 18:53:31.267379494Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858411267355288,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=52400721-9e6d-4f79-8494-17f02425272b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:53:31 ha-525790 crio[657]: time="2024-09-20 18:53:31.267863207Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a175415d-23d6-44e0-95fa-0d442c2d8fb7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:31 ha-525790 crio[657]: time="2024-09-20 18:53:31.267924109Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a175415d-23d6-44e0-95fa-0d442c2d8fb7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:31 ha-525790 crio[657]: time="2024-09-20 18:53:31.268177458Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:344b03b51dddb76a43e34eff505cae1c7f2fc0c407fd4b1907c7b90ca3f1740d,PodSandboxId:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726858192106122080,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57fdde7a007ff9a10cfbb40f67eb3fd2036aeb4918ebe808fdb7ab94429b6f90,PodSandboxId:f2f3faeb3feb37731a72146ab0e2730c2f00b0a64c288e6aa139840b8d1852b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726858057039915142,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e,PodSandboxId:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858056980739363,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1,PodSandboxId:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858056983536331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6b
fa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98,PodSandboxId:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268580
44669106744,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8,PodSandboxId:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858044313140306,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c704a3be19bcb0cfb653cb3bdad4548ff16ab59fc886290b6b1ed57874b379cc,PodSandboxId:afc309e0288a67308501f446405f65d8615c4060f819039947aff5f12a4b1be9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726858035446566658,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede9a5fdac3bc6f58bd35cff44d56d88,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1196adfd1199669f289106197057a6a027e2a97c97830961d5c102c7143d67bb,PodSandboxId:4ed8fcb6c51972392f851f91d41ef974ee35c8b05f66d02ba0fbacb37d072738,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858033110408572,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706,PodSandboxId:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858033123239626,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93,PodSandboxId:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858033076459540,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49582cb9e07244a19c1edf1accdab94a1702d3cccc9e120b67b8c49f7629db72,PodSandboxId:ee2f4d881a4246f1bf78be961d0510d0f0774b7bcb9c2febc0c3568a63704973,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858033054127944,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a175415d-23d6-44e0-95fa-0d442c2d8fb7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:31 ha-525790 crio[657]: time="2024-09-20 18:53:31.321130528Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cd2183fa-e183-4b2b-841c-19b41e704c59 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:53:31 ha-525790 crio[657]: time="2024-09-20 18:53:31.321209277Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cd2183fa-e183-4b2b-841c-19b41e704c59 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:53:31 ha-525790 crio[657]: time="2024-09-20 18:53:31.322642097Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=da3c51c7-d18c-411e-bfc5-5b8e6ca448e3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:53:31 ha-525790 crio[657]: time="2024-09-20 18:53:31.323209976Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858411323179969,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=da3c51c7-d18c-411e-bfc5-5b8e6ca448e3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:53:31 ha-525790 crio[657]: time="2024-09-20 18:53:31.324180485Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f0432e9-d7c4-4c66-a2e1-4e077ba9bafa name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:31 ha-525790 crio[657]: time="2024-09-20 18:53:31.324235258Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8f0432e9-d7c4-4c66-a2e1-4e077ba9bafa name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:31 ha-525790 crio[657]: time="2024-09-20 18:53:31.324590250Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:344b03b51dddb76a43e34eff505cae1c7f2fc0c407fd4b1907c7b90ca3f1740d,PodSandboxId:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726858192106122080,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57fdde7a007ff9a10cfbb40f67eb3fd2036aeb4918ebe808fdb7ab94429b6f90,PodSandboxId:f2f3faeb3feb37731a72146ab0e2730c2f00b0a64c288e6aa139840b8d1852b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726858057039915142,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e,PodSandboxId:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858056980739363,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1,PodSandboxId:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858056983536331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6b
fa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98,PodSandboxId:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268580
44669106744,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8,PodSandboxId:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858044313140306,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c704a3be19bcb0cfb653cb3bdad4548ff16ab59fc886290b6b1ed57874b379cc,PodSandboxId:afc309e0288a67308501f446405f65d8615c4060f819039947aff5f12a4b1be9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726858035446566658,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede9a5fdac3bc6f58bd35cff44d56d88,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1196adfd1199669f289106197057a6a027e2a97c97830961d5c102c7143d67bb,PodSandboxId:4ed8fcb6c51972392f851f91d41ef974ee35c8b05f66d02ba0fbacb37d072738,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858033110408572,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706,PodSandboxId:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858033123239626,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93,PodSandboxId:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858033076459540,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49582cb9e07244a19c1edf1accdab94a1702d3cccc9e120b67b8c49f7629db72,PodSandboxId:ee2f4d881a4246f1bf78be961d0510d0f0774b7bcb9c2febc0c3568a63704973,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858033054127944,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8f0432e9-d7c4-4c66-a2e1-4e077ba9bafa name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:31 ha-525790 crio[657]: time="2024-09-20 18:53:31.369316046Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d8f5559c-baae-4d9a-8f1b-532abd9cc83f name=/runtime.v1.RuntimeService/Version
	Sep 20 18:53:31 ha-525790 crio[657]: time="2024-09-20 18:53:31.369422034Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d8f5559c-baae-4d9a-8f1b-532abd9cc83f name=/runtime.v1.RuntimeService/Version
	Sep 20 18:53:31 ha-525790 crio[657]: time="2024-09-20 18:53:31.377895104Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=df112069-8d5c-454f-bf2f-87e0ad68e572 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:53:31 ha-525790 crio[657]: time="2024-09-20 18:53:31.378451748Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858411378416169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df112069-8d5c-454f-bf2f-87e0ad68e572 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:53:31 ha-525790 crio[657]: time="2024-09-20 18:53:31.378961455Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad509907-b1ed-444d-ab90-65708b76a59e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:31 ha-525790 crio[657]: time="2024-09-20 18:53:31.379016476Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad509907-b1ed-444d-ab90-65708b76a59e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:31 ha-525790 crio[657]: time="2024-09-20 18:53:31.379253959Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:344b03b51dddb76a43e34eff505cae1c7f2fc0c407fd4b1907c7b90ca3f1740d,PodSandboxId:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726858192106122080,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57fdde7a007ff9a10cfbb40f67eb3fd2036aeb4918ebe808fdb7ab94429b6f90,PodSandboxId:f2f3faeb3feb37731a72146ab0e2730c2f00b0a64c288e6aa139840b8d1852b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726858057039915142,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e,PodSandboxId:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858056980739363,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1,PodSandboxId:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858056983536331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6b
fa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98,PodSandboxId:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268580
44669106744,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8,PodSandboxId:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858044313140306,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c704a3be19bcb0cfb653cb3bdad4548ff16ab59fc886290b6b1ed57874b379cc,PodSandboxId:afc309e0288a67308501f446405f65d8615c4060f819039947aff5f12a4b1be9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726858035446566658,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede9a5fdac3bc6f58bd35cff44d56d88,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1196adfd1199669f289106197057a6a027e2a97c97830961d5c102c7143d67bb,PodSandboxId:4ed8fcb6c51972392f851f91d41ef974ee35c8b05f66d02ba0fbacb37d072738,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858033110408572,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706,PodSandboxId:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858033123239626,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93,PodSandboxId:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858033076459540,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49582cb9e07244a19c1edf1accdab94a1702d3cccc9e120b67b8c49f7629db72,PodSandboxId:ee2f4d881a4246f1bf78be961d0510d0f0774b7bcb9c2febc0c3568a63704973,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858033054127944,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ad509907-b1ed-444d-ab90-65708b76a59e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	344b03b51dddb       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   125671e39b996       busybox-7dff88458-z26jr
	57fdde7a007ff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   f2f3faeb3feb3       storage-provisioner
	172e8f75d2a84       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   5dbd6acffd5c5       coredns-7c65d6cfc9-nfnkj
	3dff404b6ad2a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   34517f9f64c86       coredns-7c65d6cfc9-rpcds
	5579930bef0fc       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   64136f65f6d34       kindnet-9qbm6
	3d469134674c2       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   2e440a5ac73b7       kube-proxy-958jz
	c704a3be19bcb       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   afc309e0288a6       kube-vip-ha-525790
	7d0496391eb85       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   fae09dfcf3d6f       kube-scheduler-ha-525790
	1196adfd11996       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   4ed8fcb6c5197       kube-apiserver-ha-525790
	bcca29b119984       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   17818940c2036       etcd-ha-525790
	49582cb9e0724       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   ee2f4d881a424       kube-controller-manager-ha-525790
	
	
	==> coredns [172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1] <==
	[INFO] 10.244.0.4:52678 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000127756s
	[INFO] 10.244.1.2:49868 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000196016s
	[INFO] 10.244.1.2:54874 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00387198s
	[INFO] 10.244.1.2:39870 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000203758s
	[INFO] 10.244.1.2:47679 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000185456s
	[INFO] 10.244.1.2:49534 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000164113s
	[INFO] 10.244.2.2:50032 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167479s
	[INFO] 10.244.2.2:33413 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001865571s
	[INFO] 10.244.0.4:38374 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010475s
	[INFO] 10.244.0.4:44676 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170058s
	[INFO] 10.244.0.4:54182 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000123082s
	[INFO] 10.244.0.4:52067 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108075s
	[INFO] 10.244.1.2:36885 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000133944s
	[INFO] 10.244.2.2:48327 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127372s
	[INFO] 10.244.2.2:52262 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160755s
	[INFO] 10.244.0.4:44171 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111758s
	[INFO] 10.244.1.2:36220 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000196033s
	[INFO] 10.244.1.2:33859 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000222322s
	[INFO] 10.244.1.2:55349 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000158431s
	[INFO] 10.244.2.2:37976 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138385s
	[INFO] 10.244.2.2:56378 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000191303s
	[INFO] 10.244.2.2:54246 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117607s
	[INFO] 10.244.0.4:53115 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116565s
	[INFO] 10.244.0.4:49608 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000095821s
	[INFO] 10.244.0.4:60862 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111997s
	
	
	==> coredns [3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e] <==
	[INFO] 10.244.0.4:45127 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001808433s
	[INFO] 10.244.1.2:43604 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003790448s
	[INFO] 10.244.1.2:40634 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000273503s
	[INFO] 10.244.1.2:53633 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000177331s
	[INFO] 10.244.2.2:45376 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000253726s
	[INFO] 10.244.2.2:42750 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000311517s
	[INFO] 10.244.2.2:42748 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001319529s
	[INFO] 10.244.2.2:49203 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000190348s
	[INFO] 10.244.2.2:44849 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00019366s
	[INFO] 10.244.2.2:52186 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103082s
	[INFO] 10.244.0.4:58300 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140735s
	[INFO] 10.244.0.4:59752 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001702673s
	[INFO] 10.244.0.4:33721 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001170599s
	[INFO] 10.244.0.4:42180 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061647s
	[INFO] 10.244.1.2:49177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000333372s
	[INFO] 10.244.1.2:57192 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147894s
	[INFO] 10.244.1.2:59125 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095482s
	[INFO] 10.244.2.2:50879 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019818s
	[INFO] 10.244.2.2:47467 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096359s
	[INFO] 10.244.0.4:54464 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087148s
	[INFO] 10.244.0.4:40326 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011895s
	[INFO] 10.244.0.4:46142 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071583s
	[INFO] 10.244.1.2:50168 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000224622s
	[INFO] 10.244.2.2:50611 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000117577s
	[INFO] 10.244.0.4:57391 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000320119s
	
	
	==> describe nodes <==
	Name:               ha-525790
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-525790
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=ha-525790
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_47_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:47:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-525790
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:53:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:50:23 +0000   Fri, 20 Sep 2024 18:47:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:50:23 +0000   Fri, 20 Sep 2024 18:47:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:50:23 +0000   Fri, 20 Sep 2024 18:47:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:50:23 +0000   Fri, 20 Sep 2024 18:47:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.149
	  Hostname:    ha-525790
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d3f2b96a8819496a94e034cf4adf7a85
	  System UUID:                d3f2b96a-8819-496a-94e0-34cf4adf7a85
	  Boot ID:                    02f79ecd-567f-4683-83ce-59afb46feab6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-z26jr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 coredns-7c65d6cfc9-nfnkj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m7s
	  kube-system                 coredns-7c65d6cfc9-rpcds             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m7s
	  kube-system                 etcd-ha-525790                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m12s
	  kube-system                 kindnet-9qbm6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m8s
	  kube-system                 kube-apiserver-ha-525790             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-controller-manager-ha-525790    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-proxy-958jz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-scheduler-ha-525790             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-vip-ha-525790                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m6s   kube-proxy       
	  Normal  Starting                 6m12s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m12s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m12s  kubelet          Node ha-525790 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m12s  kubelet          Node ha-525790 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m12s  kubelet          Node ha-525790 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m8s   node-controller  Node ha-525790 event: Registered Node ha-525790 in Controller
	  Normal  NodeReady                5m55s  kubelet          Node ha-525790 status is now: NodeReady
	  Normal  RegisteredNode           5m9s   node-controller  Node ha-525790 event: Registered Node ha-525790 in Controller
	  Normal  RegisteredNode           3m58s  node-controller  Node ha-525790 event: Registered Node ha-525790 in Controller
	
	
	Name:               ha-525790-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-525790-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=ha-525790
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T18_48_16_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:48:13 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-525790-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:50:56 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 20 Sep 2024 18:50:16 +0000   Fri, 20 Sep 2024 18:51:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 20 Sep 2024 18:50:16 +0000   Fri, 20 Sep 2024 18:51:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 20 Sep 2024 18:50:16 +0000   Fri, 20 Sep 2024 18:51:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 20 Sep 2024 18:50:16 +0000   Fri, 20 Sep 2024 18:51:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.246
	  Hostname:    ha-525790-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1dbde4511fc24bbcb1281f7b7d6ff24f
	  System UUID:                1dbde451-1fc2-4bbc-b128-1f7b7d6ff24f
	  Boot ID:                    9ec76d35-ca9a-483c-b479-9d99ec8feedc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7jtss                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-ha-525790-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m16s
	  kube-system                 kindnet-8glgp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m18s
	  kube-system                 kube-apiserver-ha-525790-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-controller-manager-ha-525790-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-proxy-sspfs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-scheduler-ha-525790-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-vip-ha-525790-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m18s (x8 over 5m18s)  kubelet          Node ha-525790-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m18s (x8 over 5m18s)  kubelet          Node ha-525790-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m18s (x7 over 5m18s)  kubelet          Node ha-525790-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m13s                  node-controller  Node ha-525790-m02 event: Registered Node ha-525790-m02 in Controller
	  Normal  RegisteredNode           5m9s                   node-controller  Node ha-525790-m02 event: Registered Node ha-525790-m02 in Controller
	  Normal  RegisteredNode           3m58s                  node-controller  Node ha-525790-m02 event: Registered Node ha-525790-m02 in Controller
	  Normal  NodeNotReady             113s                   node-controller  Node ha-525790-m02 status is now: NodeNotReady
	
	
	Name:               ha-525790-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-525790-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=ha-525790
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T18_49_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:49:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-525790-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:53:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:49:54 +0000   Fri, 20 Sep 2024 18:49:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:49:54 +0000   Fri, 20 Sep 2024 18:49:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:49:54 +0000   Fri, 20 Sep 2024 18:49:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:49:54 +0000   Fri, 20 Sep 2024 18:49:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.105
	  Hostname:    ha-525790-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 007556c5fa674bcd927152e3b0cca9b2
	  System UUID:                007556c5-fa67-4bcd-9271-52e3b0cca9b2
	  Boot ID:                    2d4db773-7cb0-4bef-b28d-d6863649acb9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-jmx4g                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-ha-525790-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m5s
	  kube-system                 kindnet-j5mmq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m7s
	  kube-system                 kube-apiserver-ha-525790-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-controller-manager-ha-525790-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-proxy-dx9pg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-scheduler-ha-525790-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-vip-ha-525790-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m2s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  4m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m7s (x8 over 4m8s)  kubelet          Node ha-525790-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m7s (x8 over 4m8s)  kubelet          Node ha-525790-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m7s (x7 over 4m8s)  kubelet          Node ha-525790-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m4s                 node-controller  Node ha-525790-m03 event: Registered Node ha-525790-m03 in Controller
	  Normal  RegisteredNode           4m3s                 node-controller  Node ha-525790-m03 event: Registered Node ha-525790-m03 in Controller
	  Normal  RegisteredNode           3m58s                node-controller  Node ha-525790-m03 event: Registered Node ha-525790-m03 in Controller
	
	
	Name:               ha-525790-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-525790-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=ha-525790
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T18_50_26_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:50:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-525790-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:53:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:50:57 +0000   Fri, 20 Sep 2024 18:50:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:50:57 +0000   Fri, 20 Sep 2024 18:50:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:50:57 +0000   Fri, 20 Sep 2024 18:50:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:50:57 +0000   Fri, 20 Sep 2024 18:50:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.181
	  Hostname:    ha-525790-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c58d814e5e5d49b699d9f977eb54ff58
	  System UUID:                c58d814e-5e5d-49b6-99d9-f977eb54ff58
	  Boot ID:                    69924ac5-b6f2-4ddd-bd0d-fa3c683681d6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-df8hf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m5s
	  kube-system                 kube-proxy-w98cx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  3m5s (x2 over 3m6s)  kubelet          Node ha-525790-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m5s (x2 over 3m6s)  kubelet          Node ha-525790-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m5s (x2 over 3m6s)  kubelet          Node ha-525790-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m4s                 node-controller  Node ha-525790-m04 event: Registered Node ha-525790-m04 in Controller
	  Normal  RegisteredNode           3m3s                 node-controller  Node ha-525790-m04 event: Registered Node ha-525790-m04 in Controller
	  Normal  RegisteredNode           3m3s                 node-controller  Node ha-525790-m04 event: Registered Node ha-525790-m04 in Controller
	  Normal  NodeReady                2m47s                kubelet          Node ha-525790-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep20 18:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049615] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041335] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.781215] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.493789] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.593513] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep20 18:47] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.053987] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058272] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.180542] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.143015] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.280287] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +3.923962] systemd-fstab-generator[743]: Ignoring "noauto" option for root device
	[  +3.905808] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.064972] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.290695] systemd-fstab-generator[1298]: Ignoring "noauto" option for root device
	[  +0.091789] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.472602] kauditd_printk_skb: 36 callbacks suppressed
	[ +11.974718] kauditd_printk_skb: 23 callbacks suppressed
	[Sep20 18:48] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93] <==
	{"level":"warn","ts":"2024-09-20T18:53:31.376479Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:31.476898Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:31.575913Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:31.621003Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:31.628389Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:31.631819Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:31.643344Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:31.649193Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:31.656226Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:31.659464Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:31.663161Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:31.671450Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:31.676792Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:31.677731Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:31.682850Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:31.685739Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:31.692078Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:31.699413Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:31.704584Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:31.709822Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:31.713831Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:31.716878Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:31.792009Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:31.794572Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:31.799671Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:53:31 up 6 min,  0 users,  load average: 0.44, 0.23, 0.11
	Linux ha-525790 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98] <==
	I0920 18:52:55.880188       1 main.go:299] handling current node
	I0920 18:53:05.885518       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 18:53:05.885574       1 main.go:299] handling current node
	I0920 18:53:05.885598       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:53:05.885604       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 18:53:05.885762       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0920 18:53:05.885786       1 main.go:322] Node ha-525790-m03 has CIDR [10.244.2.0/24] 
	I0920 18:53:05.885836       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 18:53:05.885842       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	I0920 18:53:15.886307       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 18:53:15.886406       1 main.go:299] handling current node
	I0920 18:53:15.886461       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:53:15.886488       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 18:53:15.886631       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0920 18:53:15.886653       1 main.go:322] Node ha-525790-m03 has CIDR [10.244.2.0/24] 
	I0920 18:53:15.886712       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 18:53:15.886731       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	I0920 18:53:25.880388       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 18:53:25.880418       1 main.go:299] handling current node
	I0920 18:53:25.880431       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:53:25.880437       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 18:53:25.880623       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0920 18:53:25.880629       1 main.go:322] Node ha-525790-m03 has CIDR [10.244.2.0/24] 
	I0920 18:53:25.880667       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 18:53:25.880672       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [1196adfd1199669f289106197057a6a027e2a97c97830961d5c102c7143d67bb] <==
	W0920 18:47:18.009766       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.149]
	I0920 18:47:18.010784       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 18:47:18.015641       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0920 18:47:18.249854       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0920 18:47:19.683867       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 18:47:19.709897       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0920 18:47:19.867045       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 18:47:23.355786       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0920 18:47:23.802179       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0920 18:49:53.563053       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46192: use of closed network connection
	E0920 18:49:53.772052       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46208: use of closed network connection
	E0920 18:49:53.971905       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46230: use of closed network connection
	E0920 18:49:54.183484       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46258: use of closed network connection
	E0920 18:49:54.358996       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46282: use of closed network connection
	E0920 18:49:54.568631       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46306: use of closed network connection
	E0920 18:49:54.751815       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46320: use of closed network connection
	E0920 18:49:54.931094       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46346: use of closed network connection
	E0920 18:49:55.134164       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46362: use of closed network connection
	E0920 18:49:55.422343       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46396: use of closed network connection
	E0920 18:49:55.606742       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46420: use of closed network connection
	E0920 18:49:55.788879       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46442: use of closed network connection
	E0920 18:49:55.968453       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46450: use of closed network connection
	E0920 18:49:56.152146       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46460: use of closed network connection
	E0920 18:49:56.335452       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46464: use of closed network connection
	W0920 18:51:07.982250       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.105 192.168.39.149]
	
	
	==> kube-controller-manager [49582cb9e07244a19c1edf1accdab94a1702d3cccc9e120b67b8c49f7629db72] <==
	I0920 18:50:26.211532       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-525790-m04" podCIDRs=["10.244.3.0/24"]
	I0920 18:50:26.211587       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:26.211616       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:26.225025       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:26.521754       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:26.959047       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:27.339450       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:28.189228       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:28.189762       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-525790-m04"
	I0920 18:50:28.268460       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:28.721421       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:28.749109       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:36.536189       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:44.973968       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:44.974514       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-525790-m04"
	I0920 18:50:44.992518       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:47.269588       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:57.000828       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:51:38.216141       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-525790-m04"
	I0920 18:51:38.216594       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m02"
	I0920 18:51:38.240377       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m02"
	I0920 18:51:38.269433       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.448755ms"
	I0920 18:51:38.269538       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="52.42µs"
	I0920 18:51:38.804819       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m02"
	I0920 18:51:43.466404       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m02"
	
	
	==> kube-proxy [3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:47:24.817372       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 18:47:24.843820       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.149"]
	E0920 18:47:24.843948       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:47:24.955225       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:47:24.955317       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:47:24.955347       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:47:24.958548       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:47:24.959874       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:47:24.959905       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:47:24.962813       1 config.go:199] "Starting service config controller"
	I0920 18:47:24.965782       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:47:24.965817       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:47:24.968165       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:47:24.968295       1 config.go:328] "Starting node config controller"
	I0920 18:47:24.968302       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:47:25.067459       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:47:25.068474       1 shared_informer.go:320] Caches are synced for node config
	I0920 18:47:25.068496       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706] <==
	E0920 18:49:50.397182       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-jmx4g\": pod busybox-7dff88458-jmx4g is already assigned to node \"ha-525790-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-jmx4g" node="ha-525790-m03"
	E0920 18:49:50.397248       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 223d79ec-368f-47a1-aa7b-26d153195e57(default/busybox-7dff88458-jmx4g) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-jmx4g"
	E0920 18:49:50.397330       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-jmx4g\": pod busybox-7dff88458-jmx4g is already assigned to node \"ha-525790-m03\"" pod="default/busybox-7dff88458-jmx4g"
	I0920 18:49:50.397369       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-jmx4g" node="ha-525790-m03"
	E0920 18:49:50.409140       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-z26jr\": pod busybox-7dff88458-z26jr is already assigned to node \"ha-525790\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-z26jr" node="ha-525790"
	E0920 18:49:50.409195       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3a3cda3d-ccab-4483-98e6-50d779cc3354(default/busybox-7dff88458-z26jr) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-z26jr"
	E0920 18:49:50.409213       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-z26jr\": pod busybox-7dff88458-z26jr is already assigned to node \"ha-525790\"" pod="default/busybox-7dff88458-z26jr"
	I0920 18:49:50.409243       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-z26jr" node="ha-525790"
	E0920 18:49:50.532066       1 schedule_one.go:1078] "Error occurred" err="Pod default/busybox-7dff88458-pt85x is already present in the active queue" pod="default/busybox-7dff88458-pt85x"
	E0920 18:50:26.262797       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-fz5b4\": pod kindnet-fz5b4 is already assigned to node \"ha-525790-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-fz5b4" node="ha-525790-m04"
	E0920 18:50:26.262881       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e8309f8d-3b06-4e9f-9bad-e0745dd2b30c(kube-system/kindnet-fz5b4) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-fz5b4"
	E0920 18:50:26.262903       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-fz5b4\": pod kindnet-fz5b4 is already assigned to node \"ha-525790-m04\"" pod="kube-system/kindnet-fz5b4"
	I0920 18:50:26.262924       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-fz5b4" node="ha-525790-m04"
	E0920 18:50:26.263223       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-w98cx\": pod kube-proxy-w98cx is already assigned to node \"ha-525790-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-w98cx" node="ha-525790-m04"
	E0920 18:50:26.263412       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod cd3e68cf-e7ed-47fc-ae4b-c701394a8c1f(kube-system/kube-proxy-w98cx) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-w98cx"
	E0920 18:50:26.263548       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-w98cx\": pod kube-proxy-w98cx is already assigned to node \"ha-525790-m04\"" pod="kube-system/kube-proxy-w98cx"
	I0920 18:50:26.263699       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-w98cx" node="ha-525790-m04"
	E0920 18:50:26.297985       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-hwgsh\": pod kindnet-hwgsh is already assigned to node \"ha-525790-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-hwgsh" node="ha-525790-m04"
	E0920 18:50:26.298064       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9ff40332-cdad-4e9f-99ca-28d1271713a8(kube-system/kindnet-hwgsh) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-hwgsh"
	E0920 18:50:26.298079       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-hwgsh\": pod kindnet-hwgsh is already assigned to node \"ha-525790-m04\"" pod="kube-system/kindnet-hwgsh"
	I0920 18:50:26.298095       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-hwgsh" node="ha-525790-m04"
	E0920 18:50:26.298461       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-rh89s\": pod kube-proxy-rh89s is already assigned to node \"ha-525790-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-rh89s" node="ha-525790-m04"
	E0920 18:50:26.298512       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 340d5abf-2e79-4cc0-8f1f-130c1e176259(kube-system/kube-proxy-rh89s) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-rh89s"
	E0920 18:50:26.298529       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-rh89s\": pod kube-proxy-rh89s is already assigned to node \"ha-525790-m04\"" pod="kube-system/kube-proxy-rh89s"
	I0920 18:50:26.298548       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-rh89s" node="ha-525790-m04"
	
	
	==> kubelet <==
	Sep 20 18:52:19 ha-525790 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 18:52:19 ha-525790 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 18:52:19 ha-525790 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 18:52:19 ha-525790 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 18:52:19 ha-525790 kubelet[1305]: E0920 18:52:19.763018    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858339762333652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:52:19 ha-525790 kubelet[1305]: E0920 18:52:19.763043    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858339762333652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:52:29 ha-525790 kubelet[1305]: E0920 18:52:29.765356    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858349764077984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:52:29 ha-525790 kubelet[1305]: E0920 18:52:29.765949    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858349764077984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:52:39 ha-525790 kubelet[1305]: E0920 18:52:39.767540    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858359766941786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:52:39 ha-525790 kubelet[1305]: E0920 18:52:39.767585    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858359766941786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:52:49 ha-525790 kubelet[1305]: E0920 18:52:49.770639    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858369770138616,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:52:49 ha-525790 kubelet[1305]: E0920 18:52:49.770662    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858369770138616,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:52:59 ha-525790 kubelet[1305]: E0920 18:52:59.772133    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858379771879043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:52:59 ha-525790 kubelet[1305]: E0920 18:52:59.772178    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858379771879043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:53:09 ha-525790 kubelet[1305]: E0920 18:53:09.773633    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858389773387666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:53:09 ha-525790 kubelet[1305]: E0920 18:53:09.773655    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858389773387666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:53:19 ha-525790 kubelet[1305]: E0920 18:53:19.642578    1305 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 18:53:19 ha-525790 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 18:53:19 ha-525790 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 18:53:19 ha-525790 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 18:53:19 ha-525790 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 18:53:19 ha-525790 kubelet[1305]: E0920 18:53:19.776518    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858399775795306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:53:19 ha-525790 kubelet[1305]: E0920 18:53:19.776559    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858399775795306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:53:29 ha-525790 kubelet[1305]: E0920 18:53:29.780049    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858409779796104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:53:29 ha-525790 kubelet[1305]: E0920 18:53:29.780095    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858409779796104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-525790 -n ha-525790
helpers_test.go:261: (dbg) Run:  kubectl --context ha-525790 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.60s)
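For manual triage of a failure like the one above, here is a minimal sketch of the same checks the harness runs, assuming the profile is still named ha-525790 and the locally built binary sits at out/minikube-linux-amd64 (both taken from the log output above; adjust if your environment differs):

	# Host/kubelet/apiserver state for every node in the HA profile (same command the test runs)
	out/minikube-linux-amd64 -p ha-525790 status -v=7 --alsologtostderr
	# Node view from inside the cluster; the stopped control-plane node (m02) would typically show NotReady here
	kubectl --context ha-525790 get nodes
	# Last 25 lines of component logs, as collected in the post-mortem section above
	out/minikube-linux-amd64 -p ha-525790 logs -n 25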

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (6.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-amd64 -p ha-525790 status -v=7 --alsologtostderr: (4.086942547s)
ha_test.go:435: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-525790 status -v=7 --alsologtostderr": 
ha_test.go:438: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-525790 status -v=7 --alsologtostderr": 
ha_test.go:441: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-525790 status -v=7 --alsologtostderr": 
ha_test.go:444: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-525790 status -v=7 --alsologtostderr": 
ha_test.go:448: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-525790 -n ha-525790
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-525790 logs -n 25: (1.381803988s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m03:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790:/home/docker/cp-test_ha-525790-m03_ha-525790.txt                       |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790 sudo cat                                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m03_ha-525790.txt                                 |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m03:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m02:/home/docker/cp-test_ha-525790-m03_ha-525790-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790-m02 sudo cat                                          | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m03_ha-525790-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m03:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04:/home/docker/cp-test_ha-525790-m03_ha-525790-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790-m04 sudo cat                                          | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m03_ha-525790-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-525790 cp testdata/cp-test.txt                                                | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3362703692/001/cp-test_ha-525790-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790:/home/docker/cp-test_ha-525790-m04_ha-525790.txt                       |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790 sudo cat                                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m04_ha-525790.txt                                 |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m02:/home/docker/cp-test_ha-525790-m04_ha-525790-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790-m02 sudo cat                                          | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m04_ha-525790-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m03:/home/docker/cp-test_ha-525790-m04_ha-525790-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790-m03 sudo cat                                          | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m04_ha-525790-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-525790 node stop m02 -v=7                                                     | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-525790 node start m02 -v=7                                                    | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:46:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:46:38.789149  762988 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:46:38.789304  762988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:46:38.789316  762988 out.go:358] Setting ErrFile to fd 2...
	I0920 18:46:38.789323  762988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:46:38.789530  762988 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
	I0920 18:46:38.790164  762988 out.go:352] Setting JSON to false
	I0920 18:46:38.791213  762988 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8949,"bootTime":1726849050,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:46:38.791325  762988 start.go:139] virtualization: kvm guest
	I0920 18:46:38.794321  762988 out.go:177] * [ha-525790] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:46:38.795880  762988 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 18:46:38.795921  762988 notify.go:220] Checking for updates...
	I0920 18:46:38.798815  762988 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:46:38.800212  762988 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:46:38.801657  762988 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:46:38.802936  762988 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:46:38.804312  762988 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:46:38.805745  762988 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:46:38.840721  762988 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 18:46:38.841998  762988 start.go:297] selected driver: kvm2
	I0920 18:46:38.842017  762988 start.go:901] validating driver "kvm2" against <nil>
	I0920 18:46:38.842030  762988 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:46:38.842791  762988 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:46:38.842923  762988 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19678-739831/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:46:38.857953  762988 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:46:38.858007  762988 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:46:38.858244  762988 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:46:38.858274  762988 cni.go:84] Creating CNI manager for ""
	I0920 18:46:38.858324  762988 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0920 18:46:38.858332  762988 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 18:46:38.858385  762988 start.go:340] cluster config:
	{Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:46:38.858482  762988 iso.go:125] acquiring lock: {Name:mk7c8e0c52ea50ffb7ac28fb9347d4f667085c62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:46:38.861017  762988 out.go:177] * Starting "ha-525790" primary control-plane node in "ha-525790" cluster
	I0920 18:46:38.862480  762988 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:46:38.862534  762988 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:46:38.862548  762988 cache.go:56] Caching tarball of preloaded images
	I0920 18:46:38.862674  762988 preload.go:172] Found /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:46:38.862687  762988 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:46:38.863061  762988 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 18:46:38.863096  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json: {Name:mk5c775b0f6d6c9cf399952e81d482461c2f3276 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
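	[Editor's note: illustrative sketch, not part of the captured log.] The cluster config above is persisted to the profile's config.json at the path in the preceding line. A minimal sketch for reading it back; the struct below names only a handful of fields visible in the logged config and is not minikube's actual config type:
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os"
	)
	
	// Subset of fields seen in the logged cluster config; illustrative only.
	type profileConfig struct {
		Name             string
		Driver           string
		KubernetesConfig struct {
			KubernetesVersion string
			ClusterName       string
			ContainerRuntime  string
		}
	}
	
	func main() {
		// Path taken from the "Saving config to ..." log line above.
		data, err := os.ReadFile("/home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json")
		if err != nil {
			panic(err)
		}
		var cfg profileConfig
		if err := json.Unmarshal(data, &cfg); err != nil {
			panic(err)
		}
		fmt.Printf("%s: driver=%s, k8s=%s, runtime=%s\n",
			cfg.Name, cfg.Driver, cfg.KubernetesConfig.KubernetesVersion, cfg.KubernetesConfig.ContainerRuntime)
	}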
	I0920 18:46:38.863265  762988 start.go:360] acquireMachinesLock for ha-525790: {Name:mke27b943eaf3105a3a7818ba8cbb5bd07aa92e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:46:38.863304  762988 start.go:364] duration metric: took 22.887µs to acquireMachinesLock for "ha-525790"
	I0920 18:46:38.863326  762988 start.go:93] Provisioning new machine with config: &{Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:46:38.863386  762988 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 18:46:38.865997  762988 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 18:46:38.866141  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:46:38.866188  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:46:38.881131  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35251
	I0920 18:46:38.881605  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:46:38.882180  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:46:38.882202  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:46:38.882573  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:46:38.882762  762988 main.go:141] libmachine: (ha-525790) Calling .GetMachineName
	I0920 18:46:38.882960  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:46:38.883106  762988 start.go:159] libmachine.API.Create for "ha-525790" (driver="kvm2")
	I0920 18:46:38.883131  762988 client.go:168] LocalClient.Create starting
	I0920 18:46:38.883164  762988 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem
	I0920 18:46:38.883195  762988 main.go:141] libmachine: Decoding PEM data...
	I0920 18:46:38.883209  762988 main.go:141] libmachine: Parsing certificate...
	I0920 18:46:38.883266  762988 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem
	I0920 18:46:38.883283  762988 main.go:141] libmachine: Decoding PEM data...
	I0920 18:46:38.883293  762988 main.go:141] libmachine: Parsing certificate...
	I0920 18:46:38.883309  762988 main.go:141] libmachine: Running pre-create checks...
	I0920 18:46:38.883317  762988 main.go:141] libmachine: (ha-525790) Calling .PreCreateCheck
	I0920 18:46:38.883674  762988 main.go:141] libmachine: (ha-525790) Calling .GetConfigRaw
	I0920 18:46:38.884046  762988 main.go:141] libmachine: Creating machine...
	I0920 18:46:38.884058  762988 main.go:141] libmachine: (ha-525790) Calling .Create
	I0920 18:46:38.884186  762988 main.go:141] libmachine: (ha-525790) Creating KVM machine...
	I0920 18:46:38.885388  762988 main.go:141] libmachine: (ha-525790) DBG | found existing default KVM network
	I0920 18:46:38.886155  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:38.886012  763011 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015bb0}
	I0920 18:46:38.886212  762988 main.go:141] libmachine: (ha-525790) DBG | created network xml: 
	I0920 18:46:38.886231  762988 main.go:141] libmachine: (ha-525790) DBG | <network>
	I0920 18:46:38.886238  762988 main.go:141] libmachine: (ha-525790) DBG |   <name>mk-ha-525790</name>
	I0920 18:46:38.886242  762988 main.go:141] libmachine: (ha-525790) DBG |   <dns enable='no'/>
	I0920 18:46:38.886247  762988 main.go:141] libmachine: (ha-525790) DBG |   
	I0920 18:46:38.886265  762988 main.go:141] libmachine: (ha-525790) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 18:46:38.886272  762988 main.go:141] libmachine: (ha-525790) DBG |     <dhcp>
	I0920 18:46:38.886279  762988 main.go:141] libmachine: (ha-525790) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 18:46:38.886301  762988 main.go:141] libmachine: (ha-525790) DBG |     </dhcp>
	I0920 18:46:38.886355  762988 main.go:141] libmachine: (ha-525790) DBG |   </ip>
	I0920 18:46:38.886369  762988 main.go:141] libmachine: (ha-525790) DBG |   
	I0920 18:46:38.886374  762988 main.go:141] libmachine: (ha-525790) DBG | </network>
	I0920 18:46:38.886382  762988 main.go:141] libmachine: (ha-525790) DBG | 
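	[Editor's note: illustrative sketch, not part of the captured log.] The DBG block above is the libvirt network definition generated for mk-ha-525790 before the private network is created. A minimal sketch for dumping that network from the host afterwards for comparison, assuming virsh is installed and the network name matches the log:
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Dump the XML of the private network created in the log above.
		out, err := exec.Command("virsh", "net-dumpxml", "mk-ha-525790").CombinedOutput()
		if err != nil {
			fmt.Printf("virsh failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("%s", out)
	}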
	I0920 18:46:38.891425  762988 main.go:141] libmachine: (ha-525790) DBG | trying to create private KVM network mk-ha-525790 192.168.39.0/24...
	I0920 18:46:38.955444  762988 main.go:141] libmachine: (ha-525790) Setting up store path in /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790 ...
	I0920 18:46:38.955497  762988 main.go:141] libmachine: (ha-525790) Building disk image from file:///home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 18:46:38.955509  762988 main.go:141] libmachine: (ha-525790) DBG | private KVM network mk-ha-525790 192.168.39.0/24 created
	I0920 18:46:38.955527  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:38.955388  763011 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:46:38.955546  762988 main.go:141] libmachine: (ha-525790) Downloading /home/jenkins/minikube-integration/19678-739831/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 18:46:39.243592  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:39.243485  763011 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa...
	I0920 18:46:39.608366  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:39.608221  763011 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/ha-525790.rawdisk...
	I0920 18:46:39.608404  762988 main.go:141] libmachine: (ha-525790) DBG | Writing magic tar header
	I0920 18:46:39.608446  762988 main.go:141] libmachine: (ha-525790) DBG | Writing SSH key tar header
	I0920 18:46:39.608516  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:39.608475  763011 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790 ...
	I0920 18:46:39.608599  762988 main.go:141] libmachine: (ha-525790) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790
	I0920 18:46:39.608627  762988 main.go:141] libmachine: (ha-525790) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790 (perms=drwx------)
	I0920 18:46:39.608656  762988 main.go:141] libmachine: (ha-525790) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube/machines (perms=drwxr-xr-x)
	I0920 18:46:39.608670  762988 main.go:141] libmachine: (ha-525790) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube/machines
	I0920 18:46:39.608683  762988 main.go:141] libmachine: (ha-525790) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:46:39.608695  762988 main.go:141] libmachine: (ha-525790) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831
	I0920 18:46:39.608706  762988 main.go:141] libmachine: (ha-525790) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube (perms=drwxr-xr-x)
	I0920 18:46:39.608718  762988 main.go:141] libmachine: (ha-525790) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 18:46:39.608730  762988 main.go:141] libmachine: (ha-525790) DBG | Checking permissions on dir: /home/jenkins
	I0920 18:46:39.608740  762988 main.go:141] libmachine: (ha-525790) DBG | Checking permissions on dir: /home
	I0920 18:46:39.608750  762988 main.go:141] libmachine: (ha-525790) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831 (perms=drwxrwxr-x)
	I0920 18:46:39.608763  762988 main.go:141] libmachine: (ha-525790) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 18:46:39.608777  762988 main.go:141] libmachine: (ha-525790) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 18:46:39.608788  762988 main.go:141] libmachine: (ha-525790) Creating domain...
	I0920 18:46:39.608796  762988 main.go:141] libmachine: (ha-525790) DBG | Skipping /home - not owner
	I0920 18:46:39.609887  762988 main.go:141] libmachine: (ha-525790) define libvirt domain using xml: 
	I0920 18:46:39.609929  762988 main.go:141] libmachine: (ha-525790) <domain type='kvm'>
	I0920 18:46:39.609936  762988 main.go:141] libmachine: (ha-525790)   <name>ha-525790</name>
	I0920 18:46:39.609941  762988 main.go:141] libmachine: (ha-525790)   <memory unit='MiB'>2200</memory>
	I0920 18:46:39.609946  762988 main.go:141] libmachine: (ha-525790)   <vcpu>2</vcpu>
	I0920 18:46:39.609950  762988 main.go:141] libmachine: (ha-525790)   <features>
	I0920 18:46:39.609954  762988 main.go:141] libmachine: (ha-525790)     <acpi/>
	I0920 18:46:39.609958  762988 main.go:141] libmachine: (ha-525790)     <apic/>
	I0920 18:46:39.609963  762988 main.go:141] libmachine: (ha-525790)     <pae/>
	I0920 18:46:39.609972  762988 main.go:141] libmachine: (ha-525790)     
	I0920 18:46:39.609977  762988 main.go:141] libmachine: (ha-525790)   </features>
	I0920 18:46:39.609981  762988 main.go:141] libmachine: (ha-525790)   <cpu mode='host-passthrough'>
	I0920 18:46:39.609988  762988 main.go:141] libmachine: (ha-525790)   
	I0920 18:46:39.609991  762988 main.go:141] libmachine: (ha-525790)   </cpu>
	I0920 18:46:39.609996  762988 main.go:141] libmachine: (ha-525790)   <os>
	I0920 18:46:39.610000  762988 main.go:141] libmachine: (ha-525790)     <type>hvm</type>
	I0920 18:46:39.610004  762988 main.go:141] libmachine: (ha-525790)     <boot dev='cdrom'/>
	I0920 18:46:39.610012  762988 main.go:141] libmachine: (ha-525790)     <boot dev='hd'/>
	I0920 18:46:39.610034  762988 main.go:141] libmachine: (ha-525790)     <bootmenu enable='no'/>
	I0920 18:46:39.610055  762988 main.go:141] libmachine: (ha-525790)   </os>
	I0920 18:46:39.610063  762988 main.go:141] libmachine: (ha-525790)   <devices>
	I0920 18:46:39.610071  762988 main.go:141] libmachine: (ha-525790)     <disk type='file' device='cdrom'>
	I0920 18:46:39.610087  762988 main.go:141] libmachine: (ha-525790)       <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/boot2docker.iso'/>
	I0920 18:46:39.610097  762988 main.go:141] libmachine: (ha-525790)       <target dev='hdc' bus='scsi'/>
	I0920 18:46:39.610105  762988 main.go:141] libmachine: (ha-525790)       <readonly/>
	I0920 18:46:39.610111  762988 main.go:141] libmachine: (ha-525790)     </disk>
	I0920 18:46:39.610117  762988 main.go:141] libmachine: (ha-525790)     <disk type='file' device='disk'>
	I0920 18:46:39.610124  762988 main.go:141] libmachine: (ha-525790)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 18:46:39.610165  762988 main.go:141] libmachine: (ha-525790)       <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/ha-525790.rawdisk'/>
	I0920 18:46:39.610187  762988 main.go:141] libmachine: (ha-525790)       <target dev='hda' bus='virtio'/>
	I0920 18:46:39.610197  762988 main.go:141] libmachine: (ha-525790)     </disk>
	I0920 18:46:39.610210  762988 main.go:141] libmachine: (ha-525790)     <interface type='network'>
	I0920 18:46:39.610222  762988 main.go:141] libmachine: (ha-525790)       <source network='mk-ha-525790'/>
	I0920 18:46:39.610232  762988 main.go:141] libmachine: (ha-525790)       <model type='virtio'/>
	I0920 18:46:39.610240  762988 main.go:141] libmachine: (ha-525790)     </interface>
	I0920 18:46:39.610250  762988 main.go:141] libmachine: (ha-525790)     <interface type='network'>
	I0920 18:46:39.610258  762988 main.go:141] libmachine: (ha-525790)       <source network='default'/>
	I0920 18:46:39.610275  762988 main.go:141] libmachine: (ha-525790)       <model type='virtio'/>
	I0920 18:46:39.610283  762988 main.go:141] libmachine: (ha-525790)     </interface>
	I0920 18:46:39.610288  762988 main.go:141] libmachine: (ha-525790)     <serial type='pty'>
	I0920 18:46:39.610292  762988 main.go:141] libmachine: (ha-525790)       <target port='0'/>
	I0920 18:46:39.610299  762988 main.go:141] libmachine: (ha-525790)     </serial>
	I0920 18:46:39.610308  762988 main.go:141] libmachine: (ha-525790)     <console type='pty'>
	I0920 18:46:39.610326  762988 main.go:141] libmachine: (ha-525790)       <target type='serial' port='0'/>
	I0920 18:46:39.610338  762988 main.go:141] libmachine: (ha-525790)     </console>
	I0920 18:46:39.610349  762988 main.go:141] libmachine: (ha-525790)     <rng model='virtio'>
	I0920 18:46:39.610362  762988 main.go:141] libmachine: (ha-525790)       <backend model='random'>/dev/random</backend>
	I0920 18:46:39.610371  762988 main.go:141] libmachine: (ha-525790)     </rng>
	I0920 18:46:39.610375  762988 main.go:141] libmachine: (ha-525790)     
	I0920 18:46:39.610381  762988 main.go:141] libmachine: (ha-525790)     
	I0920 18:46:39.610387  762988 main.go:141] libmachine: (ha-525790)   </devices>
	I0920 18:46:39.610397  762988 main.go:141] libmachine: (ha-525790) </domain>
	I0920 18:46:39.610405  762988 main.go:141] libmachine: (ha-525790) 
	I0920 18:46:39.614486  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:50:2a:69 in network default
	I0920 18:46:39.615032  762988 main.go:141] libmachine: (ha-525790) Ensuring networks are active...
	I0920 18:46:39.615051  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:39.615715  762988 main.go:141] libmachine: (ha-525790) Ensuring network default is active
	I0920 18:46:39.616018  762988 main.go:141] libmachine: (ha-525790) Ensuring network mk-ha-525790 is active
	I0920 18:46:39.616415  762988 main.go:141] libmachine: (ha-525790) Getting domain xml...
	I0920 18:46:39.617025  762988 main.go:141] libmachine: (ha-525790) Creating domain...
	I0920 18:46:40.795742  762988 main.go:141] libmachine: (ha-525790) Waiting to get IP...
	I0920 18:46:40.796420  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:40.796852  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:40.796878  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:40.796826  763011 retry.go:31] will retry after 263.82587ms: waiting for machine to come up
	I0920 18:46:41.062273  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:41.062647  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:41.062678  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:41.062592  763011 retry.go:31] will retry after 386.712635ms: waiting for machine to come up
	I0920 18:46:41.451226  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:41.451632  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:41.451661  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:41.451579  763011 retry.go:31] will retry after 342.693912ms: waiting for machine to come up
	I0920 18:46:41.796191  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:41.796691  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:41.796715  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:41.796648  763011 retry.go:31] will retry after 576.710058ms: waiting for machine to come up
	I0920 18:46:42.375515  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:42.376036  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:42.376061  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:42.375999  763011 retry.go:31] will retry after 663.670245ms: waiting for machine to come up
	I0920 18:46:43.040735  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:43.041215  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:43.041246  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:43.041140  763011 retry.go:31] will retry after 597.358521ms: waiting for machine to come up
	I0920 18:46:43.639686  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:43.640007  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:43.640036  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:43.639963  763011 retry.go:31] will retry after 1.058911175s: waiting for machine to come up
	I0920 18:46:44.700947  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:44.701385  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:44.701413  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:44.701343  763011 retry.go:31] will retry after 1.038799294s: waiting for machine to come up
	I0920 18:46:45.741663  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:45.742102  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:45.742126  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:45.742045  763011 retry.go:31] will retry after 1.383433424s: waiting for machine to come up
	I0920 18:46:47.127537  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:47.128058  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:47.128078  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:47.127983  763011 retry.go:31] will retry after 1.617569351s: waiting for machine to come up
	I0920 18:46:48.747698  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:48.748209  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:48.748240  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:48.748143  763011 retry.go:31] will retry after 2.371010271s: waiting for machine to come up
	I0920 18:46:51.120964  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:51.121427  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:51.121458  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:51.121379  763011 retry.go:31] will retry after 2.200163157s: waiting for machine to come up
	I0920 18:46:53.322674  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:53.322965  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:53.322986  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:53.322923  763011 retry.go:31] will retry after 3.176543377s: waiting for machine to come up
	I0920 18:46:56.502595  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:46:56.502881  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find current IP address of domain ha-525790 in network mk-ha-525790
	I0920 18:46:56.502907  762988 main.go:141] libmachine: (ha-525790) DBG | I0920 18:46:56.502808  763011 retry.go:31] will retry after 5.194371334s: waiting for machine to come up
	I0920 18:47:01.701005  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:01.701389  762988 main.go:141] libmachine: (ha-525790) Found IP for machine: 192.168.39.149
	I0920 18:47:01.701409  762988 main.go:141] libmachine: (ha-525790) Reserving static IP address...
	I0920 18:47:01.701417  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has current primary IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:01.701762  762988 main.go:141] libmachine: (ha-525790) DBG | unable to find host DHCP lease matching {name: "ha-525790", mac: "52:54:00:93:48:3a", ip: "192.168.39.149"} in network mk-ha-525790
	I0920 18:47:01.773329  762988 main.go:141] libmachine: (ha-525790) DBG | Getting to WaitForSSH function...
	I0920 18:47:01.773358  762988 main.go:141] libmachine: (ha-525790) Reserved static IP address: 192.168.39.149
	I0920 18:47:01.773388  762988 main.go:141] libmachine: (ha-525790) Waiting for SSH to be available...
	I0920 18:47:01.776048  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:01.776426  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:minikube Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:01.776463  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:01.776622  762988 main.go:141] libmachine: (ha-525790) DBG | Using SSH client type: external
	I0920 18:47:01.776646  762988 main.go:141] libmachine: (ha-525790) DBG | Using SSH private key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa (-rw-------)
	I0920 18:47:01.776683  762988 main.go:141] libmachine: (ha-525790) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.149 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:47:01.776700  762988 main.go:141] libmachine: (ha-525790) DBG | About to run SSH command:
	I0920 18:47:01.776715  762988 main.go:141] libmachine: (ha-525790) DBG | exit 0
	I0920 18:47:01.898967  762988 main.go:141] libmachine: (ha-525790) DBG | SSH cmd err, output: <nil>: 
	I0920 18:47:01.899221  762988 main.go:141] libmachine: (ha-525790) KVM machine creation complete!
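	[Editor's note: illustrative sketch, not part of the captured log.] The "waiting for machine to come up" lines show a retry loop with growing delays until the VM obtains a DHCP lease. A minimal sketch of the same idea, assuming virsh is available; the network name and MAC address are taken from the log, and the backoff values are illustrative:
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	func main() {
		const network = "mk-ha-525790"
		const mac = "52:54:00:93:48:3a"
		delay := 300 * time.Millisecond
		for attempt := 1; attempt <= 15; attempt++ {
			// virsh net-dhcp-leases lists active DHCP leases for the network.
			out, err := exec.Command("virsh", "net-dhcp-leases", network).CombinedOutput()
			if err == nil && strings.Contains(strings.ToLower(string(out)), mac) {
				fmt.Printf("lease found after %d attempt(s)\n", attempt)
				return
			}
			fmt.Printf("no lease yet, retrying in %s\n", delay)
			time.Sleep(delay)
			delay = delay * 3 / 2 // grow the wait, similar to the increasing retries in the log
		}
		fmt.Println("gave up waiting for a DHCP lease")
	}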
	I0920 18:47:01.899544  762988 main.go:141] libmachine: (ha-525790) Calling .GetConfigRaw
	I0920 18:47:01.900277  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:01.900493  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:01.900650  762988 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 18:47:01.900666  762988 main.go:141] libmachine: (ha-525790) Calling .GetState
	I0920 18:47:01.901918  762988 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 18:47:01.901931  762988 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 18:47:01.901936  762988 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 18:47:01.901941  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:01.904499  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:01.904882  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:01.904911  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:01.905023  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:01.905203  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:01.905333  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:01.905455  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:01.905648  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:01.905950  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:47:01.905967  762988 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 18:47:02.002303  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:47:02.002325  762988 main.go:141] libmachine: Detecting the provisioner...
	I0920 18:47:02.002332  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:02.005206  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.005502  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.005524  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.005703  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:02.005932  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.006115  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.006265  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:02.006494  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:02.006725  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:47:02.006738  762988 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 18:47:02.103696  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 18:47:02.103818  762988 main.go:141] libmachine: found compatible host: buildroot
	I0920 18:47:02.103834  762988 main.go:141] libmachine: Provisioning with buildroot...
	I0920 18:47:02.103845  762988 main.go:141] libmachine: (ha-525790) Calling .GetMachineName
	I0920 18:47:02.104117  762988 buildroot.go:166] provisioning hostname "ha-525790"
	I0920 18:47:02.104147  762988 main.go:141] libmachine: (ha-525790) Calling .GetMachineName
	I0920 18:47:02.104362  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:02.107026  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.107445  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.107466  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.107725  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:02.107909  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.108050  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.108218  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:02.108380  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:02.108558  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:47:02.108576  762988 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-525790 && echo "ha-525790" | sudo tee /etc/hostname
	I0920 18:47:02.221193  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-525790
	
	I0920 18:47:02.221225  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:02.224188  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.224526  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.224548  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.224771  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:02.224973  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.225135  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.225274  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:02.225455  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:02.225692  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:47:02.225716  762988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-525790' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-525790/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-525790' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:47:02.333039  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
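	[Editor's note: illustrative sketch, not part of the captured log.] Provisioning runs each of these shell snippets over SSH with the generated id_rsa key. A simplified stand-in (not libmachine's actual runner) using golang.org/x/crypto/ssh, with the user, address, and key path taken from the log:
	package main
	
	import (
		"fmt"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; not for production use
		}
		client, err := ssh.Dial("tcp", "192.168.39.149:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()
		// Same hostname command the provisioner runs in the log above.
		out, err := session.CombinedOutput(`sudo hostname ha-525790 && echo "ha-525790" | sudo tee /etc/hostname`)
		fmt.Printf("%s err=%v\n", out, err)
	}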
	I0920 18:47:02.333077  762988 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19678-739831/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-739831/.minikube}
	I0920 18:47:02.333139  762988 buildroot.go:174] setting up certificates
	I0920 18:47:02.333156  762988 provision.go:84] configureAuth start
	I0920 18:47:02.333175  762988 main.go:141] libmachine: (ha-525790) Calling .GetMachineName
	I0920 18:47:02.333477  762988 main.go:141] libmachine: (ha-525790) Calling .GetIP
	I0920 18:47:02.336179  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.336437  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.336466  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.336621  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:02.338903  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.339190  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.339228  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.339347  762988 provision.go:143] copyHostCerts
	I0920 18:47:02.339388  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 18:47:02.339428  762988 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem, removing ...
	I0920 18:47:02.339443  762988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 18:47:02.339511  762988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem (1123 bytes)
	I0920 18:47:02.339645  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 18:47:02.339667  762988 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem, removing ...
	I0920 18:47:02.339674  762988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 18:47:02.339705  762988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem (1679 bytes)
	I0920 18:47:02.339762  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 18:47:02.339781  762988 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem, removing ...
	I0920 18:47:02.339788  762988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 18:47:02.339812  762988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem (1078 bytes)
	I0920 18:47:02.339874  762988 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem org=jenkins.ha-525790 san=[127.0.0.1 192.168.39.149 ha-525790 localhost minikube]
	I0920 18:47:02.453692  762988 provision.go:177] copyRemoteCerts
	I0920 18:47:02.453777  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:47:02.453804  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:02.456622  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.456981  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.457012  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.457155  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:02.457322  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.457514  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:02.457694  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:47:02.537102  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 18:47:02.537192  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:47:02.561583  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 18:47:02.561653  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0920 18:47:02.584887  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 18:47:02.584963  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:47:02.607882  762988 provision.go:87] duration metric: took 274.708599ms to configureAuth
	I0920 18:47:02.607913  762988 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:47:02.608135  762988 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:47:02.608263  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:02.610585  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.610941  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.610966  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.611170  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:02.611364  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.611566  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.611733  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:02.611901  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:02.612097  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:47:02.612128  762988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:47:02.825619  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
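For reference, the sysconfig drop-in written by the SSH command above can be reproduced and checked by hand on the guest; a minimal sketch, assuming the same /etc/sysconfig/crio.minikube path and crio service shown in the log:

    # Recreate the drop-in exactly as logged, restart CRI-O, then confirm it took effect.
    sudo mkdir -p /etc/sysconfig
    printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio
    cat /etc/sysconfig/crio.minikube      # expect the insecure-registry flag
    systemctl is-active crio              # expect: active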
	I0920 18:47:02.825649  762988 main.go:141] libmachine: Checking connection to Docker...
	I0920 18:47:02.825670  762988 main.go:141] libmachine: (ha-525790) Calling .GetURL
	I0920 18:47:02.826777  762988 main.go:141] libmachine: (ha-525790) DBG | Using libvirt version 6000000
	I0920 18:47:02.828685  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.829016  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.829041  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.829240  762988 main.go:141] libmachine: Docker is up and running!
	I0920 18:47:02.829256  762988 main.go:141] libmachine: Reticulating splines...
	I0920 18:47:02.829269  762988 client.go:171] duration metric: took 23.94612541s to LocalClient.Create
	I0920 18:47:02.829292  762988 start.go:167] duration metric: took 23.946187981s to libmachine.API.Create "ha-525790"
	I0920 18:47:02.829302  762988 start.go:293] postStartSetup for "ha-525790" (driver="kvm2")
	I0920 18:47:02.829311  762988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:47:02.829329  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:02.829550  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:47:02.829607  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:02.831515  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.831740  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.831770  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.831871  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:02.832029  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.832155  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:02.832317  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:47:02.912925  762988 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:47:02.917265  762988 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:47:02.917289  762988 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/addons for local assets ...
	I0920 18:47:02.917365  762988 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/files for local assets ...
	I0920 18:47:02.917439  762988 filesync.go:149] local asset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> 7484972.pem in /etc/ssl/certs
	I0920 18:47:02.917449  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /etc/ssl/certs/7484972.pem
	I0920 18:47:02.917538  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:47:02.926976  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 18:47:02.950998  762988 start.go:296] duration metric: took 121.680006ms for postStartSetup
	I0920 18:47:02.951052  762988 main.go:141] libmachine: (ha-525790) Calling .GetConfigRaw
	I0920 18:47:02.951761  762988 main.go:141] libmachine: (ha-525790) Calling .GetIP
	I0920 18:47:02.954370  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.954692  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.954720  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.954955  762988 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 18:47:02.955155  762988 start.go:128] duration metric: took 24.09175682s to createHost
	I0920 18:47:02.955178  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:02.957364  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.957683  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:02.957707  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:02.957847  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:02.958049  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.958195  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:02.958370  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:02.958531  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:02.958721  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:47:02.958745  762988 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:47:03.055624  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726858023.014434190
	
	I0920 18:47:03.055646  762988 fix.go:216] guest clock: 1726858023.014434190
	I0920 18:47:03.055653  762988 fix.go:229] Guest: 2024-09-20 18:47:03.01443419 +0000 UTC Remote: 2024-09-20 18:47:02.955165997 +0000 UTC m=+24.204227210 (delta=59.268193ms)
	I0920 18:47:03.055673  762988 fix.go:200] guest clock delta is within tolerance: 59.268193ms
	I0920 18:47:03.055678  762988 start.go:83] releasing machines lock for "ha-525790", held for 24.192365497s
	I0920 18:47:03.055696  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:03.056004  762988 main.go:141] libmachine: (ha-525790) Calling .GetIP
	I0920 18:47:03.058619  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:03.058967  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:03.059002  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:03.059176  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:03.059645  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:03.059786  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:03.059913  762988 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:47:03.059955  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:03.060006  762988 ssh_runner.go:195] Run: cat /version.json
	I0920 18:47:03.060036  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:03.062498  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:03.062744  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:03.062833  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:03.062884  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:03.063020  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:03.063078  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:03.063109  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:03.063168  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:03.063236  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:03.063307  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:03.063405  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:03.063423  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:47:03.063542  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:03.063665  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:47:03.136335  762988 ssh_runner.go:195] Run: systemctl --version
	I0920 18:47:03.170125  762988 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:47:03.331364  762988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:47:03.337153  762988 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:47:03.337233  762988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:47:03.353297  762988 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
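The find/-exec above sidelines any bridge or podman CNI configs so that the CNI minikube installs later is the only one CRI-O loads; the same rename in plain shell (paths from the log, the loop itself is an assumption):

    # Disable conflicting bridge/podman CNI configs by renaming them, mirroring the logged find/-exec.
    for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
      [ -e "$f" ] || continue
      case "$f" in *.mk_disabled) continue ;; esac
      sudo mv "$f" "$f.mk_disabled"
    done
    ls /etc/cni/net.d/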
	I0920 18:47:03.353324  762988 start.go:495] detecting cgroup driver to use...
	I0920 18:47:03.353385  762988 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:47:03.369816  762988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:47:03.383774  762988 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:47:03.383838  762988 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:47:03.397487  762988 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:47:03.411243  762988 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:47:03.523455  762988 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:47:03.671823  762988 docker.go:233] disabling docker service ...
	I0920 18:47:03.671918  762988 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:47:03.687139  762988 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:47:03.700569  762988 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:47:03.840971  762988 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:47:03.962385  762988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:47:03.976750  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:47:03.995774  762988 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:47:03.995835  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:04.007019  762988 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:47:04.007124  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:04.018001  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:04.028509  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:04.039860  762988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:47:04.050769  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:04.061191  762988 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:04.077692  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:04.088041  762988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:47:04.097754  762988 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:47:04.097807  762988 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:47:04.110739  762988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:47:04.120636  762988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:47:04.245299  762988 ssh_runner.go:195] Run: sudo systemctl restart crio
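The sed edits and sysctl/modprobe calls above, followed by this restart, are what switch the runtime over to the settings minikube wants; a quick way to confirm the resulting state on the guest (file paths from the log, the inspection commands are assumptions):

    # Inspect the crictl endpoint and the CRI-O drop-in the sed edits modified.
    cat /etc/crictl.yaml                                   # expect: runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # Kernel prerequisites enabled just before the restart.
    lsmod | grep br_netfilter
    cat /proc/sys/net/ipv4/ip_forward                      # expect: 1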
	I0920 18:47:04.341170  762988 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:47:04.341258  762988 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:47:04.345975  762988 start.go:563] Will wait 60s for crictl version
	I0920 18:47:04.346047  762988 ssh_runner.go:195] Run: which crictl
	I0920 18:47:04.349925  762988 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:47:04.390230  762988 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:47:04.390341  762988 ssh_runner.go:195] Run: crio --version
	I0920 18:47:04.418445  762988 ssh_runner.go:195] Run: crio --version
	I0920 18:47:04.447740  762988 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:47:04.448969  762988 main.go:141] libmachine: (ha-525790) Calling .GetIP
	I0920 18:47:04.451547  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:04.451921  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:04.451950  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:04.452148  762988 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:47:04.456198  762988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:47:04.470013  762988 kubeadm.go:883] updating cluster {Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:47:04.470186  762988 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:47:04.470265  762988 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:47:04.502535  762988 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 18:47:04.502609  762988 ssh_runner.go:195] Run: which lz4
	I0920 18:47:04.506581  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0920 18:47:04.506673  762988 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 18:47:04.510814  762988 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 18:47:04.510861  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 18:47:05.839638  762988 crio.go:462] duration metric: took 1.33298536s to copy over tarball
	I0920 18:47:05.839723  762988 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 18:47:07.786766  762988 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.947011448s)
	I0920 18:47:07.786795  762988 crio.go:469] duration metric: took 1.947128446s to extract the tarball
	I0920 18:47:07.786805  762988 ssh_runner.go:146] rm: /preloaded.tar.lz4
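The three steps above (stat, scp, tar) are how the preloaded image tarball gets onto the node; the guest-side portion in plain shell, taken from the logged commands, with crictl used afterwards to confirm the images landed:

    # Unpack the preloaded image tarball under /var, then clean up, as the logged tar/rm do.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4
    sudo crictl images --output json | head                # images should now be reported as preloaded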
	I0920 18:47:07.822913  762988 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:47:07.866552  762988 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:47:07.866583  762988 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:47:07.866592  762988 kubeadm.go:934] updating node { 192.168.39.149 8443 v1.31.1 crio true true} ...
	I0920 18:47:07.866704  762988 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-525790 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
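The kubelet flags above are written to a systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf appears further down); a sketch of how to confirm what the unit will actually run, with the inspection commands being assumptions:

    # Show the merged kubelet unit, including the minikube drop-in (10-kubeadm.conf).
    systemctl cat kubelet
    # The effective ExecStart should carry the node IP from the log.
    systemctl show kubelet -p ExecStart | grep -o -e '--node-ip=192.168.39.149'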
	I0920 18:47:07.866781  762988 ssh_runner.go:195] Run: crio config
	I0920 18:47:07.918540  762988 cni.go:84] Creating CNI manager for ""
	I0920 18:47:07.918563  762988 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 18:47:07.918573  762988 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:47:07.918597  762988 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.149 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-525790 NodeName:ha-525790 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.149"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.149 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:47:07.918730  762988 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.149
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-525790"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.149
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.149"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
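Before the kubeadm init further down runs against this file, the generated config can be sanity-checked on the node; a sketch, assuming kubeadm's config validate subcommand is available in v1.31.1 and using the binary and file paths the log shows:

    # Validate the generated kubeadm config with the staged v1.31.1 binary.
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
    # Print the defaults for comparison with the overrides above.
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config print init-defaults | head -n 20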
	I0920 18:47:07.918753  762988 kube-vip.go:115] generating kube-vip config ...
	I0920 18:47:07.918798  762988 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 18:47:07.936288  762988 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 18:47:07.936429  762988 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
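Once the control plane is up, the kube-vip static pod above can be checked directly; the VIP 192.168.39.254 and interface eth0 come from the manifest, while the pod name (static pods are suffixed with the node name) and the commands below are assumptions:

    # kube-vip should bind the VIP on eth0 after winning leader election.
    ip addr show dev eth0 | grep 192.168.39.254
    # The static pod mirror should appear as kube-vip-ha-525790 (assumed name).
    sudo crictl ps --name kube-vip
    kubectl -n kube-system get pod kube-vip-ha-525790 -o wide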
	I0920 18:47:07.936497  762988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:47:07.945867  762988 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:47:07.945940  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0920 18:47:07.955191  762988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0920 18:47:07.971064  762988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:47:07.986880  762988 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0920 18:47:08.002662  762988 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0920 18:47:08.019579  762988 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 18:47:08.023552  762988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:47:08.035218  762988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:47:08.170218  762988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:47:08.187527  762988 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790 for IP: 192.168.39.149
	I0920 18:47:08.187547  762988 certs.go:194] generating shared ca certs ...
	I0920 18:47:08.187568  762988 certs.go:226] acquiring lock for ca certs: {Name:mkf559981e1ff96dd3b092845a7637f34a653668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:08.187793  762988 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key
	I0920 18:47:08.187883  762988 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key
	I0920 18:47:08.187899  762988 certs.go:256] generating profile certs ...
	I0920 18:47:08.187973  762988 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.key
	I0920 18:47:08.187993  762988 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.crt with IP's: []
	I0920 18:47:08.272186  762988 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.crt ...
	I0920 18:47:08.272216  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.crt: {Name:mk7bd0f4b5267ef296fffaf22c63ade5f9317aee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:08.272387  762988 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.key ...
	I0920 18:47:08.272398  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.key: {Name:mk8397cc62a5b5fd0095d7257df95debaa0a3c5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:08.272479  762988 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.39888826
	I0920 18:47:08.272493  762988 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.39888826 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.149 192.168.39.254]
	I0920 18:47:08.448019  762988 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.39888826 ...
	I0920 18:47:08.448049  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.39888826: {Name:mk46ff6887950fec6d616a29dc6bce205118977d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:08.448240  762988 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.39888826 ...
	I0920 18:47:08.448262  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.39888826: {Name:mk9b06f9440d087fb58cd5f31657e72732704a6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:08.448360  762988 certs.go:381] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.39888826 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt
	I0920 18:47:08.448487  762988 certs.go:385] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.39888826 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key
	I0920 18:47:08.448573  762988 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key
	I0920 18:47:08.448592  762988 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt with IP's: []
	I0920 18:47:08.547781  762988 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt ...
	I0920 18:47:08.547811  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt: {Name:mk5f440c35d9494faae93b7f24e431b15c93d038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:08.547991  762988 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key ...
	I0920 18:47:08.548027  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key: {Name:mk1af5a674ecd36547ebff165e719d66a8eaf2a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:08.548154  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 18:47:08.548179  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 18:47:08.548198  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 18:47:08.548217  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 18:47:08.548234  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 18:47:08.548251  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 18:47:08.548270  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 18:47:08.548288  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 18:47:08.548368  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem (1338 bytes)
	W0920 18:47:08.548419  762988 certs.go:480] ignoring /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497_empty.pem, impossibly tiny 0 bytes
	I0920 18:47:08.548433  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:47:08.548468  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:47:08.548498  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:47:08.548526  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem (1679 bytes)
	I0920 18:47:08.548582  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 18:47:08.548616  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem -> /usr/share/ca-certificates/748497.pem
	I0920 18:47:08.548636  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /usr/share/ca-certificates/7484972.pem
	I0920 18:47:08.548655  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:47:08.549274  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:47:08.575606  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:47:08.599030  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:47:08.622271  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:47:08.645192  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 18:47:08.668189  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:47:08.691174  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:47:08.714332  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:47:08.737751  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem --> /usr/share/ca-certificates/748497.pem (1338 bytes)
	I0920 18:47:08.760383  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /usr/share/ca-certificates/7484972.pem (1708 bytes)
	I0920 18:47:08.783502  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:47:08.806863  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
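The apiserver certificate copied above was generated for the service IP, loopback, node IP and the HA VIP (see the crypto.go line earlier); one way to confirm the SANs on the node, with the openssl invocation being an assumption:

    # Dump the Subject Alternative Names from the copied apiserver certificate.
    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
    # Expect 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.149 and the VIP 192.168.39.254 among the IP SANs.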
	I0920 18:47:08.822981  762988 ssh_runner.go:195] Run: openssl version
	I0920 18:47:08.828850  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/748497.pem && ln -fs /usr/share/ca-certificates/748497.pem /etc/ssl/certs/748497.pem"
	I0920 18:47:08.839624  762988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/748497.pem
	I0920 18:47:08.844261  762988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 18:33 /usr/share/ca-certificates/748497.pem
	I0920 18:47:08.844324  762988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/748497.pem
	I0920 18:47:08.850299  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/748497.pem /etc/ssl/certs/51391683.0"
	I0920 18:47:08.860928  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7484972.pem && ln -fs /usr/share/ca-certificates/7484972.pem /etc/ssl/certs/7484972.pem"
	I0920 18:47:08.871606  762988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7484972.pem
	I0920 18:47:08.876264  762988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 18:33 /usr/share/ca-certificates/7484972.pem
	I0920 18:47:08.876328  762988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7484972.pem
	I0920 18:47:08.882105  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7484972.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:47:08.892622  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:47:08.903139  762988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:47:08.907653  762988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:47:08.907717  762988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:47:08.913362  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
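The openssl -hash / ln -fs pairs above are the manual equivalent of c_rehash: OpenSSL finds trust anchors by <subject-hash>.0 symlinks under /etc/ssl/certs. The same idea for a single certificate, using a path from the log:

    # Link a CA certificate into the OpenSSL hashed layout and verify it resolves.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
    openssl verify -CApath /etc/ssl/certs "$CERT"          # expect: OK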
	I0920 18:47:08.923853  762988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:47:08.927915  762988 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 18:47:08.927964  762988 kubeadm.go:392] StartCluster: {Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:47:08.928033  762988 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:47:08.928074  762988 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:47:08.975658  762988 cri.go:89] found id: ""
	I0920 18:47:08.975731  762988 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:47:08.987853  762988 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:47:09.001997  762988 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:47:09.015239  762988 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:47:09.015263  762988 kubeadm.go:157] found existing configuration files:
	
	I0920 18:47:09.015328  762988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:47:09.024322  762988 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:47:09.024391  762988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:47:09.033789  762988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:47:09.042729  762988 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:47:09.042806  762988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:47:09.052389  762988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:47:09.061397  762988 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:47:09.061452  762988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:47:09.070628  762988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:47:09.079481  762988 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:47:09.079574  762988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:47:09.088812  762988 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 18:47:09.197025  762988 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 18:47:09.197195  762988 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:47:09.302732  762988 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:47:09.302875  762988 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:47:09.303013  762988 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 18:47:09.313100  762988 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:47:09.315042  762988 out.go:235]   - Generating certificates and keys ...
	I0920 18:47:09.315126  762988 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:47:09.315194  762988 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:47:09.561066  762988 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 18:47:09.701075  762988 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 18:47:09.963251  762988 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 18:47:10.218874  762988 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 18:47:10.374815  762988 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 18:47:10.375019  762988 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-525790 localhost] and IPs [192.168.39.149 127.0.0.1 ::1]
	I0920 18:47:10.536783  762988 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 18:47:10.536945  762988 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-525790 localhost] and IPs [192.168.39.149 127.0.0.1 ::1]
	I0920 18:47:10.653048  762988 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 18:47:10.817540  762988 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 18:47:11.052072  762988 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 18:47:11.052166  762988 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:47:11.275604  762988 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:47:11.340320  762988 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 18:47:11.606513  762988 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:47:11.722778  762988 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:47:11.939356  762988 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:47:11.939850  762988 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:47:11.942972  762988 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:47:11.945229  762988 out.go:235]   - Booting up control plane ...
	I0920 18:47:11.945356  762988 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:47:11.945485  762988 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:47:11.945574  762988 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:47:11.961277  762988 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:47:11.967235  762988 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:47:11.967294  762988 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:47:12.103452  762988 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 18:47:12.103652  762988 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 18:47:12.605055  762988 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.510324ms
	I0920 18:47:12.605178  762988 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 18:47:18.584157  762988 kubeadm.go:310] [api-check] The API server is healthy after 5.978671976s
	I0920 18:47:18.596695  762988 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 18:47:19.113972  762988 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 18:47:19.144976  762988 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 18:47:19.145190  762988 kubeadm.go:310] [mark-control-plane] Marking the node ha-525790 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 18:47:19.157610  762988 kubeadm.go:310] [bootstrap-token] Using token: qd32pn.8pqkvbtlqp80l6sb
	I0920 18:47:19.159113  762988 out.go:235]   - Configuring RBAC rules ...
	I0920 18:47:19.159238  762988 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 18:47:19.164190  762988 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 18:47:19.177203  762988 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 18:47:19.185189  762988 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 18:47:19.189876  762988 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 18:47:19.193529  762988 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 18:47:19.311685  762988 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 18:47:19.754352  762988 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 18:47:20.310973  762988 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 18:47:20.311943  762988 kubeadm.go:310] 
	I0920 18:47:20.312030  762988 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 18:47:20.312039  762988 kubeadm.go:310] 
	I0920 18:47:20.312140  762988 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 18:47:20.312149  762988 kubeadm.go:310] 
	I0920 18:47:20.312178  762988 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 18:47:20.312290  762988 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 18:47:20.312369  762988 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 18:47:20.312380  762988 kubeadm.go:310] 
	I0920 18:47:20.312430  762988 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 18:47:20.312442  762988 kubeadm.go:310] 
	I0920 18:47:20.312481  762988 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 18:47:20.312487  762988 kubeadm.go:310] 
	I0920 18:47:20.312536  762988 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 18:47:20.312615  762988 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 18:47:20.312715  762988 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 18:47:20.312735  762988 kubeadm.go:310] 
	I0920 18:47:20.312856  762988 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 18:47:20.312961  762988 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 18:47:20.312973  762988 kubeadm.go:310] 
	I0920 18:47:20.313079  762988 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qd32pn.8pqkvbtlqp80l6sb \
	I0920 18:47:20.313228  762988 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:947ef21afc8104efa9fe7e5dbe397ab7540e2665a521761d784eb9c9d11b061d \
	I0920 18:47:20.313262  762988 kubeadm.go:310] 	--control-plane 
	I0920 18:47:20.313271  762988 kubeadm.go:310] 
	I0920 18:47:20.313383  762988 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 18:47:20.313397  762988 kubeadm.go:310] 
	I0920 18:47:20.313513  762988 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qd32pn.8pqkvbtlqp80l6sb \
	I0920 18:47:20.313639  762988 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:947ef21afc8104efa9fe7e5dbe397ab7540e2665a521761d784eb9c9d11b061d 
	I0920 18:47:20.314670  762988 kubeadm.go:310] W0920 18:47:09.152542     827 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:47:20.315023  762988 kubeadm.go:310] W0920 18:47:09.153465     827 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:47:20.315172  762988 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
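
	The join commands above embed the bootstrap token qd32pn.8pqkvbtlqp80l6sb. kubeadm bootstrap tokens have a limited lifetime (24 hours by default), so a node added later needs freshly issued join material; a minimal sketch, run on an existing control-plane host, of regenerating it (stock kubeadm commands, not minikube-specific):

		# Print a new worker join command with a fresh bootstrap token
		sudo kubeadm token create --print-join-command
		# For an additional control-plane node, re-upload the control-plane certs and
		# use the printed certificate key together with --control-plane --certificate-key
		sudo kubeadm init phase upload-certs --upload-certs
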
	I0920 18:47:20.315210  762988 cni.go:84] Creating CNI manager for ""
	I0920 18:47:20.315225  762988 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 18:47:20.317188  762988 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0920 18:47:20.318757  762988 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0920 18:47:20.324392  762988 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0920 18:47:20.324411  762988 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0920 18:47:20.347801  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0920 18:47:20.735995  762988 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:47:20.736093  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:47:20.736105  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-525790 minikube.k8s.io/updated_at=2024_09_20T18_47_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a minikube.k8s.io/name=ha-525790 minikube.k8s.io/primary=true
	I0920 18:47:20.761909  762988 ops.go:34] apiserver oom_adj: -16
	I0920 18:47:20.876678  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:47:21.377092  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:47:21.876896  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:47:22.377010  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:47:22.877069  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:47:23.377474  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:47:23.877640  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:47:24.377768  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:47:24.504008  762988 kubeadm.go:1113] duration metric: took 3.76800228s to wait for elevateKubeSystemPrivileges
	I0920 18:47:24.504045  762988 kubeadm.go:394] duration metric: took 15.576084363s to StartCluster
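
	The run of "kubectl get sa default" calls above is minikube polling, on roughly a 500ms interval, until the default ServiceAccount exists in the new cluster; that wait is what the 3.768s elevateKubeSystemPrivileges metric measures. A rough shell equivalent of the same wait, using the paths from the log (the loop itself is illustrative, not minikube's actual code):

		KUBECTL=/var/lib/minikube/binaries/v1.31.1/kubectl
		KUBECONFIG=/var/lib/minikube/kubeconfig
		# Poll until the default ServiceAccount has been created by the controller manager
		until sudo "$KUBECTL" --kubeconfig="$KUBECONFIG" get sa default >/dev/null 2>&1; do
		    sleep 0.5
		done
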
	I0920 18:47:24.504070  762988 settings.go:142] acquiring lock: {Name:mk0bd1e421bf437575c076c52c1ff2f74497a1ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:24.504282  762988 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:47:24.505108  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/kubeconfig: {Name:mk275c54cf52b0ccdc22fcaa39c7b9c31092c648 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:24.505342  762988 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:47:24.505366  762988 start.go:241] waiting for startup goroutines ...
	I0920 18:47:24.505366  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 18:47:24.505382  762988 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 18:47:24.505468  762988 addons.go:69] Setting storage-provisioner=true in profile "ha-525790"
	I0920 18:47:24.505483  762988 addons.go:69] Setting default-storageclass=true in profile "ha-525790"
	I0920 18:47:24.505492  762988 addons.go:234] Setting addon storage-provisioner=true in "ha-525790"
	I0920 18:47:24.505509  762988 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-525790"
	I0920 18:47:24.505524  762988 host.go:66] Checking if "ha-525790" exists ...
	I0920 18:47:24.505571  762988 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:47:24.505974  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:47:24.506023  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:47:24.506141  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:47:24.506249  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:47:24.522502  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41623
	I0920 18:47:24.522534  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38073
	I0920 18:47:24.522991  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:47:24.523040  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:47:24.523523  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:47:24.523546  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:47:24.523666  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:47:24.523684  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:47:24.523961  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:47:24.524077  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:47:24.524239  762988 main.go:141] libmachine: (ha-525790) Calling .GetState
	I0920 18:47:24.524629  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:47:24.524696  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:47:24.526413  762988 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:47:24.526810  762988 kapi.go:59] client config for ha-525790: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.crt", KeyFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.key", CAFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 18:47:24.527471  762988 cert_rotation.go:140] Starting client certificate rotation controller
	I0920 18:47:24.527819  762988 addons.go:234] Setting addon default-storageclass=true in "ha-525790"
	I0920 18:47:24.527875  762988 host.go:66] Checking if "ha-525790" exists ...
	I0920 18:47:24.528265  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:47:24.528313  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:47:24.542871  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35201
	I0920 18:47:24.543236  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33445
	I0920 18:47:24.543494  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:47:24.543587  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:47:24.544071  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:47:24.544093  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:47:24.544229  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:47:24.544255  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:47:24.544432  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:47:24.544641  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:47:24.544640  762988 main.go:141] libmachine: (ha-525790) Calling .GetState
	I0920 18:47:24.545205  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:47:24.545253  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:47:24.546391  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:24.548710  762988 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:47:24.550144  762988 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:47:24.550165  762988 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:47:24.550186  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:24.553367  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:24.553828  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:24.553854  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:24.553998  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:24.554216  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:24.554440  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:24.554622  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:47:24.561549  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37321
	I0920 18:47:24.561966  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:47:24.562494  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:47:24.562519  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:47:24.562876  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:47:24.563072  762988 main.go:141] libmachine: (ha-525790) Calling .GetState
	I0920 18:47:24.564587  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:24.564814  762988 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:47:24.564831  762988 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:47:24.564849  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:24.567687  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:24.568171  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:24.568193  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:24.568319  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:24.568510  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:24.568703  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:24.568857  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:47:24.656392  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 18:47:24.815217  762988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:47:24.828379  762988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:47:25.253619  762988 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
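
	The long sed pipeline at 18:47:24.656392 rewrites the Corefile in the coredns ConfigMap: it inserts a log directive ahead of the errors line and a hosts block ahead of the "forward . /etc/resolv.conf" line, so that host.minikube.internal resolves to the host gateway 192.168.39.1 from inside the cluster. Assuming the rest of the Corefile is the stock CoreDNS default (elided below), the patched stanza looks roughly like this:

		.:53 {
		    log
		    errors
		    ...
		    hosts {
		       192.168.39.1 host.minikube.internal
		       fallthrough
		    }
		    forward . /etc/resolv.conf
		    ...
		}
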
	I0920 18:47:25.464741  762988 main.go:141] libmachine: Making call to close driver server
	I0920 18:47:25.464767  762988 main.go:141] libmachine: (ha-525790) Calling .Close
	I0920 18:47:25.464846  762988 main.go:141] libmachine: Making call to close driver server
	I0920 18:47:25.464869  762988 main.go:141] libmachine: (ha-525790) Calling .Close
	I0920 18:47:25.465054  762988 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:47:25.465071  762988 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:47:25.465081  762988 main.go:141] libmachine: Making call to close driver server
	I0920 18:47:25.465089  762988 main.go:141] libmachine: (ha-525790) Calling .Close
	I0920 18:47:25.465214  762988 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:47:25.465241  762988 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:47:25.465251  762988 main.go:141] libmachine: Making call to close driver server
	I0920 18:47:25.465258  762988 main.go:141] libmachine: (ha-525790) Calling .Close
	I0920 18:47:25.465320  762988 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:47:25.465336  762988 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:47:25.465344  762988 main.go:141] libmachine: (ha-525790) DBG | Closing plugin on server side
	I0920 18:47:25.465497  762988 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:47:25.465514  762988 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:47:25.465592  762988 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0920 18:47:25.465620  762988 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0920 18:47:25.465728  762988 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0920 18:47:25.465739  762988 round_trippers.go:469] Request Headers:
	I0920 18:47:25.465759  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:47:25.465768  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:47:25.475780  762988 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0920 18:47:25.476328  762988 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0920 18:47:25.476346  762988 round_trippers.go:469] Request Headers:
	I0920 18:47:25.476353  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:47:25.476356  762988 round_trippers.go:473]     Content-Type: application/json
	I0920 18:47:25.476359  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:47:25.478464  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:47:25.478670  762988 main.go:141] libmachine: Making call to close driver server
	I0920 18:47:25.478686  762988 main.go:141] libmachine: (ha-525790) Calling .Close
	I0920 18:47:25.479015  762988 main.go:141] libmachine: Successfully made call to close driver server
	I0920 18:47:25.479056  762988 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 18:47:25.479019  762988 main.go:141] libmachine: (ha-525790) DBG | Closing plugin on server side
	I0920 18:47:25.480685  762988 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0920 18:47:25.481832  762988 addons.go:510] duration metric: took 976.454814ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0920 18:47:25.481877  762988 start.go:246] waiting for cluster config update ...
	I0920 18:47:25.481891  762988 start.go:255] writing updated cluster config ...
	I0920 18:47:25.483450  762988 out.go:201] 
	I0920 18:47:25.484717  762988 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:47:25.484795  762988 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 18:47:25.486329  762988 out.go:177] * Starting "ha-525790-m02" control-plane node in "ha-525790" cluster
	I0920 18:47:25.487492  762988 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:47:25.487516  762988 cache.go:56] Caching tarball of preloaded images
	I0920 18:47:25.487633  762988 preload.go:172] Found /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:47:25.487647  762988 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:47:25.487721  762988 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 18:47:25.487913  762988 start.go:360] acquireMachinesLock for ha-525790-m02: {Name:mke27b943eaf3105a3a7818ba8cbb5bd07aa92e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:47:25.487963  762988 start.go:364] duration metric: took 29.413µs to acquireMachinesLock for "ha-525790-m02"
	I0920 18:47:25.487982  762988 start.go:93] Provisioning new machine with config: &{Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:47:25.488070  762988 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0920 18:47:25.489602  762988 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 18:47:25.489710  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:47:25.489745  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:47:25.504741  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I0920 18:47:25.505176  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:47:25.505735  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:47:25.505756  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:47:25.506114  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:47:25.506304  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetMachineName
	I0920 18:47:25.506440  762988 main.go:141] libmachine: (ha-525790-m02) Calling .DriverName
	I0920 18:47:25.506586  762988 start.go:159] libmachine.API.Create for "ha-525790" (driver="kvm2")
	I0920 18:47:25.506620  762988 client.go:168] LocalClient.Create starting
	I0920 18:47:25.506658  762988 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem
	I0920 18:47:25.506697  762988 main.go:141] libmachine: Decoding PEM data...
	I0920 18:47:25.506717  762988 main.go:141] libmachine: Parsing certificate...
	I0920 18:47:25.506786  762988 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem
	I0920 18:47:25.506825  762988 main.go:141] libmachine: Decoding PEM data...
	I0920 18:47:25.506864  762988 main.go:141] libmachine: Parsing certificate...
	I0920 18:47:25.506891  762988 main.go:141] libmachine: Running pre-create checks...
	I0920 18:47:25.506903  762988 main.go:141] libmachine: (ha-525790-m02) Calling .PreCreateCheck
	I0920 18:47:25.507083  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetConfigRaw
	I0920 18:47:25.507514  762988 main.go:141] libmachine: Creating machine...
	I0920 18:47:25.507530  762988 main.go:141] libmachine: (ha-525790-m02) Calling .Create
	I0920 18:47:25.507681  762988 main.go:141] libmachine: (ha-525790-m02) Creating KVM machine...
	I0920 18:47:25.508920  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found existing default KVM network
	I0920 18:47:25.509048  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found existing private KVM network mk-ha-525790
	I0920 18:47:25.509185  762988 main.go:141] libmachine: (ha-525790-m02) Setting up store path in /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02 ...
	I0920 18:47:25.509201  762988 main.go:141] libmachine: (ha-525790-m02) Building disk image from file:///home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 18:47:25.509310  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:25.509191  763373 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:47:25.509384  762988 main.go:141] libmachine: (ha-525790-m02) Downloading /home/jenkins/minikube-integration/19678-739831/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 18:47:25.810758  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:25.810588  763373 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/id_rsa...
	I0920 18:47:26.052474  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:26.052313  763373 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/ha-525790-m02.rawdisk...
	I0920 18:47:26.052509  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Writing magic tar header
	I0920 18:47:26.052523  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Writing SSH key tar header
	I0920 18:47:26.052535  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:26.052440  763373 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02 ...
	I0920 18:47:26.052629  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02
	I0920 18:47:26.052676  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube/machines
	I0920 18:47:26.052691  762988 main.go:141] libmachine: (ha-525790-m02) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02 (perms=drwx------)
	I0920 18:47:26.052705  762988 main.go:141] libmachine: (ha-525790-m02) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube/machines (perms=drwxr-xr-x)
	I0920 18:47:26.052718  762988 main.go:141] libmachine: (ha-525790-m02) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube (perms=drwxr-xr-x)
	I0920 18:47:26.052738  762988 main.go:141] libmachine: (ha-525790-m02) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831 (perms=drwxrwxr-x)
	I0920 18:47:26.052758  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:47:26.052768  762988 main.go:141] libmachine: (ha-525790-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 18:47:26.052788  762988 main.go:141] libmachine: (ha-525790-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 18:47:26.052797  762988 main.go:141] libmachine: (ha-525790-m02) Creating domain...
	I0920 18:47:26.052815  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831
	I0920 18:47:26.052826  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 18:47:26.052837  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Checking permissions on dir: /home/jenkins
	I0920 18:47:26.052849  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Checking permissions on dir: /home
	I0920 18:47:26.052861  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Skipping /home - not owner
	I0920 18:47:26.053670  762988 main.go:141] libmachine: (ha-525790-m02) define libvirt domain using xml: 
	I0920 18:47:26.053692  762988 main.go:141] libmachine: (ha-525790-m02) <domain type='kvm'>
	I0920 18:47:26.053711  762988 main.go:141] libmachine: (ha-525790-m02)   <name>ha-525790-m02</name>
	I0920 18:47:26.053719  762988 main.go:141] libmachine: (ha-525790-m02)   <memory unit='MiB'>2200</memory>
	I0920 18:47:26.053731  762988 main.go:141] libmachine: (ha-525790-m02)   <vcpu>2</vcpu>
	I0920 18:47:26.053741  762988 main.go:141] libmachine: (ha-525790-m02)   <features>
	I0920 18:47:26.053752  762988 main.go:141] libmachine: (ha-525790-m02)     <acpi/>
	I0920 18:47:26.053761  762988 main.go:141] libmachine: (ha-525790-m02)     <apic/>
	I0920 18:47:26.053790  762988 main.go:141] libmachine: (ha-525790-m02)     <pae/>
	I0920 18:47:26.053810  762988 main.go:141] libmachine: (ha-525790-m02)     
	I0920 18:47:26.053820  762988 main.go:141] libmachine: (ha-525790-m02)   </features>
	I0920 18:47:26.053828  762988 main.go:141] libmachine: (ha-525790-m02)   <cpu mode='host-passthrough'>
	I0920 18:47:26.053841  762988 main.go:141] libmachine: (ha-525790-m02)   
	I0920 18:47:26.053848  762988 main.go:141] libmachine: (ha-525790-m02)   </cpu>
	I0920 18:47:26.053859  762988 main.go:141] libmachine: (ha-525790-m02)   <os>
	I0920 18:47:26.053883  762988 main.go:141] libmachine: (ha-525790-m02)     <type>hvm</type>
	I0920 18:47:26.053908  762988 main.go:141] libmachine: (ha-525790-m02)     <boot dev='cdrom'/>
	I0920 18:47:26.053933  762988 main.go:141] libmachine: (ha-525790-m02)     <boot dev='hd'/>
	I0920 18:47:26.053946  762988 main.go:141] libmachine: (ha-525790-m02)     <bootmenu enable='no'/>
	I0920 18:47:26.053958  762988 main.go:141] libmachine: (ha-525790-m02)   </os>
	I0920 18:47:26.053975  762988 main.go:141] libmachine: (ha-525790-m02)   <devices>
	I0920 18:47:26.053988  762988 main.go:141] libmachine: (ha-525790-m02)     <disk type='file' device='cdrom'>
	I0920 18:47:26.053999  762988 main.go:141] libmachine: (ha-525790-m02)       <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/boot2docker.iso'/>
	I0920 18:47:26.054008  762988 main.go:141] libmachine: (ha-525790-m02)       <target dev='hdc' bus='scsi'/>
	I0920 18:47:26.054017  762988 main.go:141] libmachine: (ha-525790-m02)       <readonly/>
	I0920 18:47:26.054026  762988 main.go:141] libmachine: (ha-525790-m02)     </disk>
	I0920 18:47:26.054036  762988 main.go:141] libmachine: (ha-525790-m02)     <disk type='file' device='disk'>
	I0920 18:47:26.054048  762988 main.go:141] libmachine: (ha-525790-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 18:47:26.054067  762988 main.go:141] libmachine: (ha-525790-m02)       <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/ha-525790-m02.rawdisk'/>
	I0920 18:47:26.054080  762988 main.go:141] libmachine: (ha-525790-m02)       <target dev='hda' bus='virtio'/>
	I0920 18:47:26.054092  762988 main.go:141] libmachine: (ha-525790-m02)     </disk>
	I0920 18:47:26.054102  762988 main.go:141] libmachine: (ha-525790-m02)     <interface type='network'>
	I0920 18:47:26.054113  762988 main.go:141] libmachine: (ha-525790-m02)       <source network='mk-ha-525790'/>
	I0920 18:47:26.054121  762988 main.go:141] libmachine: (ha-525790-m02)       <model type='virtio'/>
	I0920 18:47:26.054138  762988 main.go:141] libmachine: (ha-525790-m02)     </interface>
	I0920 18:47:26.054148  762988 main.go:141] libmachine: (ha-525790-m02)     <interface type='network'>
	I0920 18:47:26.054159  762988 main.go:141] libmachine: (ha-525790-m02)       <source network='default'/>
	I0920 18:47:26.054170  762988 main.go:141] libmachine: (ha-525790-m02)       <model type='virtio'/>
	I0920 18:47:26.054182  762988 main.go:141] libmachine: (ha-525790-m02)     </interface>
	I0920 18:47:26.054192  762988 main.go:141] libmachine: (ha-525790-m02)     <serial type='pty'>
	I0920 18:47:26.054202  762988 main.go:141] libmachine: (ha-525790-m02)       <target port='0'/>
	I0920 18:47:26.054210  762988 main.go:141] libmachine: (ha-525790-m02)     </serial>
	I0920 18:47:26.054226  762988 main.go:141] libmachine: (ha-525790-m02)     <console type='pty'>
	I0920 18:47:26.054239  762988 main.go:141] libmachine: (ha-525790-m02)       <target type='serial' port='0'/>
	I0920 18:47:26.054250  762988 main.go:141] libmachine: (ha-525790-m02)     </console>
	I0920 18:47:26.054260  762988 main.go:141] libmachine: (ha-525790-m02)     <rng model='virtio'>
	I0920 18:47:26.054269  762988 main.go:141] libmachine: (ha-525790-m02)       <backend model='random'>/dev/random</backend>
	I0920 18:47:26.054275  762988 main.go:141] libmachine: (ha-525790-m02)     </rng>
	I0920 18:47:26.054282  762988 main.go:141] libmachine: (ha-525790-m02)     
	I0920 18:47:26.054290  762988 main.go:141] libmachine: (ha-525790-m02)     
	I0920 18:47:26.054302  762988 main.go:141] libmachine: (ha-525790-m02)   </devices>
	I0920 18:47:26.054314  762988 main.go:141] libmachine: (ha-525790-m02) </domain>
	I0920 18:47:26.054327  762988 main.go:141] libmachine: (ha-525790-m02) 
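
	Stripped of the log prefixes, the domain definition the kvm2 driver hands to libvirt for ha-525790-m02 is, in substance, the following (empty placeholder lines from the log are dropped; this is a readable reconstruction of the XML above, not additional configuration):

		<domain type='kvm'>
		  <name>ha-525790-m02</name>
		  <memory unit='MiB'>2200</memory>
		  <vcpu>2</vcpu>
		  <features>
		    <acpi/>
		    <apic/>
		    <pae/>
		  </features>
		  <cpu mode='host-passthrough'></cpu>
		  <os>
		    <type>hvm</type>
		    <boot dev='cdrom'/>
		    <boot dev='hd'/>
		    <bootmenu enable='no'/>
		  </os>
		  <devices>
		    <disk type='file' device='cdrom'>
		      <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/boot2docker.iso'/>
		      <target dev='hdc' bus='scsi'/>
		      <readonly/>
		    </disk>
		    <disk type='file' device='disk'>
		      <driver name='qemu' type='raw' cache='default' io='threads'/>
		      <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/ha-525790-m02.rawdisk'/>
		      <target dev='hda' bus='virtio'/>
		    </disk>
		    <interface type='network'>
		      <source network='mk-ha-525790'/>
		      <model type='virtio'/>
		    </interface>
		    <interface type='network'>
		      <source network='default'/>
		      <model type='virtio'/>
		    </interface>
		    <serial type='pty'>
		      <target port='0'/>
		    </serial>
		    <console type='pty'>
		      <target type='serial' port='0'/>
		    </console>
		    <rng model='virtio'>
		      <backend model='random'>/dev/random</backend>
		    </rng>
		  </devices>
		</domain>
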
	I0920 18:47:26.060630  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:c9:44:90 in network default
	I0920 18:47:26.061118  762988 main.go:141] libmachine: (ha-525790-m02) Ensuring networks are active...
	I0920 18:47:26.061136  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:26.061831  762988 main.go:141] libmachine: (ha-525790-m02) Ensuring network default is active
	I0920 18:47:26.062169  762988 main.go:141] libmachine: (ha-525790-m02) Ensuring network mk-ha-525790 is active
	I0920 18:47:26.062475  762988 main.go:141] libmachine: (ha-525790-m02) Getting domain xml...
	I0920 18:47:26.063135  762988 main.go:141] libmachine: (ha-525790-m02) Creating domain...
	I0920 18:47:27.281978  762988 main.go:141] libmachine: (ha-525790-m02) Waiting to get IP...
	I0920 18:47:27.282784  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:27.283239  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:27.283266  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:27.283218  763373 retry.go:31] will retry after 308.177361ms: waiting for machine to come up
	I0920 18:47:27.592590  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:27.593066  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:27.593096  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:27.593029  763373 retry.go:31] will retry after 320.236434ms: waiting for machine to come up
	I0920 18:47:27.914511  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:27.914888  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:27.914914  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:27.914871  763373 retry.go:31] will retry after 467.681075ms: waiting for machine to come up
	I0920 18:47:28.384709  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:28.385145  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:28.385176  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:28.385093  763373 retry.go:31] will retry after 475.809922ms: waiting for machine to come up
	I0920 18:47:28.862677  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:28.863104  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:28.863166  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:28.863088  763373 retry.go:31] will retry after 752.437443ms: waiting for machine to come up
	I0920 18:47:29.616869  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:29.617208  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:29.617236  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:29.617153  763373 retry.go:31] will retry after 885.836184ms: waiting for machine to come up
	I0920 18:47:30.505116  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:30.505517  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:30.505574  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:30.505468  763373 retry.go:31] will retry after 963.771364ms: waiting for machine to come up
	I0920 18:47:31.470533  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:31.470960  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:31.470987  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:31.470922  763373 retry.go:31] will retry after 1.119790188s: waiting for machine to come up
	I0920 18:47:32.592108  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:32.592570  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:32.592610  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:32.592526  763373 retry.go:31] will retry after 1.532725085s: waiting for machine to come up
	I0920 18:47:34.127220  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:34.127626  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:34.127659  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:34.127555  763373 retry.go:31] will retry after 1.862816679s: waiting for machine to come up
	I0920 18:47:35.991806  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:35.992125  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:35.992154  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:35.992071  763373 retry.go:31] will retry after 2.15065243s: waiting for machine to come up
	I0920 18:47:38.145444  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:38.145875  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:38.145907  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:38.145806  763373 retry.go:31] will retry after 3.304630599s: waiting for machine to come up
	I0920 18:47:41.451734  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:41.452111  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:41.452140  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:41.452065  763373 retry.go:31] will retry after 3.579286099s: waiting for machine to come up
	I0920 18:47:45.035810  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:45.036306  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find current IP address of domain ha-525790-m02 in network mk-ha-525790
	I0920 18:47:45.036331  762988 main.go:141] libmachine: (ha-525790-m02) DBG | I0920 18:47:45.036255  763373 retry.go:31] will retry after 4.166411475s: waiting for machine to come up
	I0920 18:47:49.204465  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.205113  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has current primary IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.205136  762988 main.go:141] libmachine: (ha-525790-m02) Found IP for machine: 192.168.39.246
	I0920 18:47:49.205146  762988 main.go:141] libmachine: (ha-525790-m02) Reserving static IP address...
	I0920 18:47:49.205644  762988 main.go:141] libmachine: (ha-525790-m02) DBG | unable to find host DHCP lease matching {name: "ha-525790-m02", mac: "52:54:00:da:aa:a2", ip: "192.168.39.246"} in network mk-ha-525790
	I0920 18:47:49.279479  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Getting to WaitForSSH function...
	I0920 18:47:49.279570  762988 main.go:141] libmachine: (ha-525790-m02) Reserved static IP address: 192.168.39.246
	I0920 18:47:49.279586  762988 main.go:141] libmachine: (ha-525790-m02) Waiting for SSH to be available...
	I0920 18:47:49.282091  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.282697  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:minikube Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:49.282724  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.282939  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Using SSH client type: external
	I0920 18:47:49.282962  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/id_rsa (-rw-------)
	I0920 18:47:49.283009  762988 main.go:141] libmachine: (ha-525790-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:47:49.283028  762988 main.go:141] libmachine: (ha-525790-m02) DBG | About to run SSH command:
	I0920 18:47:49.283043  762988 main.go:141] libmachine: (ha-525790-m02) DBG | exit 0
	I0920 18:47:49.406686  762988 main.go:141] libmachine: (ha-525790-m02) DBG | SSH cmd err, output: <nil>: 
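
	The WaitForSSH probe above shells out to /usr/bin/ssh with the argument list dumped at 18:47:49.283009 and runs "exit 0" until it succeeds. Reordered into the conventional options-then-destination form, that probe is roughly the following command (key path and address taken verbatim from the log):

		/usr/bin/ssh -F /dev/null \
		    -o ConnectionAttempts=3 -o ConnectTimeout=10 \
		    -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
		    -o PasswordAuthentication=no -o ServerAliveInterval=60 \
		    -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
		    -o IdentitiesOnly=yes \
		    -i /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/id_rsa \
		    -p 22 docker@192.168.39.246 'exit 0'
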
	I0920 18:47:49.406894  762988 main.go:141] libmachine: (ha-525790-m02) KVM machine creation complete!
	I0920 18:47:49.407253  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetConfigRaw
	I0920 18:47:49.407921  762988 main.go:141] libmachine: (ha-525790-m02) Calling .DriverName
	I0920 18:47:49.408101  762988 main.go:141] libmachine: (ha-525790-m02) Calling .DriverName
	I0920 18:47:49.408280  762988 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 18:47:49.408299  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetState
	I0920 18:47:49.409531  762988 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 18:47:49.409549  762988 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 18:47:49.409556  762988 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 18:47:49.409565  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:47:49.411929  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.412327  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:49.412357  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.412422  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:47:49.412599  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:49.412798  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:49.412930  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:47:49.413134  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:49.413339  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 18:47:49.413349  762988 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 18:47:49.514173  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:47:49.514209  762988 main.go:141] libmachine: Detecting the provisioner...
	I0920 18:47:49.514222  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:47:49.516963  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.517420  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:49.517450  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.517591  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:47:49.517799  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:49.517980  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:49.518113  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:47:49.518250  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:49.518433  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 18:47:49.518443  762988 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 18:47:49.619473  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 18:47:49.619576  762988 main.go:141] libmachine: found compatible host: buildroot
	I0920 18:47:49.619587  762988 main.go:141] libmachine: Provisioning with buildroot...
	I0920 18:47:49.619599  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetMachineName
	I0920 18:47:49.619832  762988 buildroot.go:166] provisioning hostname "ha-525790-m02"
	I0920 18:47:49.619860  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetMachineName
	I0920 18:47:49.620048  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:47:49.622596  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.622960  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:49.622986  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.623162  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:47:49.623347  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:49.623512  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:49.623614  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:47:49.623826  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:49.624053  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 18:47:49.624072  762988 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-525790-m02 && echo "ha-525790-m02" | sudo tee /etc/hostname
	I0920 18:47:49.741686  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-525790-m02
	
	I0920 18:47:49.741719  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:47:49.744162  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.744537  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:49.744566  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.744764  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:47:49.744977  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:49.745123  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:49.745246  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:47:49.745415  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:49.745636  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 18:47:49.745654  762988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-525790-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-525790-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-525790-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:47:49.861819  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:47:49.861869  762988 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19678-739831/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-739831/.minikube}
	I0920 18:47:49.861890  762988 buildroot.go:174] setting up certificates
	I0920 18:47:49.861903  762988 provision.go:84] configureAuth start
	I0920 18:47:49.861915  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetMachineName
	I0920 18:47:49.862237  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetIP
	I0920 18:47:49.864787  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.865160  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:49.865188  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.865324  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:47:49.867360  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.867673  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:49.867699  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:49.867911  762988 provision.go:143] copyHostCerts
	I0920 18:47:49.867938  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 18:47:49.867981  762988 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem, removing ...
	I0920 18:47:49.867990  762988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 18:47:49.868053  762988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem (1078 bytes)
	I0920 18:47:49.868121  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 18:47:49.868140  762988 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem, removing ...
	I0920 18:47:49.868144  762988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 18:47:49.868168  762988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem (1123 bytes)
	I0920 18:47:49.868256  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 18:47:49.868279  762988 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem, removing ...
	I0920 18:47:49.868285  762988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 18:47:49.868309  762988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem (1679 bytes)
	I0920 18:47:49.868354  762988 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem org=jenkins.ha-525790-m02 san=[127.0.0.1 192.168.39.246 ha-525790-m02 localhost minikube]
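	The provision step above (provision.go:117) issues a server certificate for the new node whose SANs cover 127.0.0.1, the node IP 192.168.39.246, the hostname ha-525790-m02, localhost and minikube, with org jenkins.ha-525790-m02, signed by the profile's ca.pem/ca-key.pem. A rough, self-contained Go sketch of producing a certificate with that SAN set (self-signed here for brevity; illustrative only, not minikube's implementation):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Key for the server certificate (the real run signs with the profile CA instead).
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-525790-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SAN entries matching the san=[...] list in the log line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.246")},
			DNSNames:    []string{"ha-525790-m02", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}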
	I0920 18:47:50.026326  762988 provision.go:177] copyRemoteCerts
	I0920 18:47:50.026387  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:47:50.026413  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:47:50.029067  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.029469  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:50.029558  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.029689  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:47:50.029875  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:50.030065  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:47:50.030209  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/id_rsa Username:docker}
	I0920 18:47:50.113429  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 18:47:50.113512  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:47:50.138381  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 18:47:50.138457  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 18:47:50.162199  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 18:47:50.162285  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:47:50.185945  762988 provision.go:87] duration metric: took 324.027275ms to configureAuth
	I0920 18:47:50.185972  762988 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:47:50.186148  762988 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:47:50.186225  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:47:50.190079  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.190492  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:50.190513  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.190710  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:47:50.190964  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:50.191145  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:50.191294  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:47:50.191424  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:50.191588  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 18:47:50.191602  762988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:47:50.416583  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:47:50.416624  762988 main.go:141] libmachine: Checking connection to Docker...
	I0920 18:47:50.416631  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetURL
	I0920 18:47:50.417912  762988 main.go:141] libmachine: (ha-525790-m02) DBG | Using libvirt version 6000000
	I0920 18:47:50.420017  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.420424  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:50.420454  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.420641  762988 main.go:141] libmachine: Docker is up and running!
	I0920 18:47:50.420664  762988 main.go:141] libmachine: Reticulating splines...
	I0920 18:47:50.420672  762988 client.go:171] duration metric: took 24.914041264s to LocalClient.Create
	I0920 18:47:50.420699  762988 start.go:167] duration metric: took 24.914113541s to libmachine.API.Create "ha-525790"
	I0920 18:47:50.420712  762988 start.go:293] postStartSetup for "ha-525790-m02" (driver="kvm2")
	I0920 18:47:50.420726  762988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:47:50.420744  762988 main.go:141] libmachine: (ha-525790-m02) Calling .DriverName
	I0920 18:47:50.420995  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:47:50.421029  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:47:50.423161  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.423420  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:50.423447  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.423594  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:47:50.423797  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:50.423953  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:47:50.424081  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/id_rsa Username:docker}
	I0920 18:47:50.505401  762988 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:47:50.510220  762988 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:47:50.510246  762988 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/addons for local assets ...
	I0920 18:47:50.510332  762988 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/files for local assets ...
	I0920 18:47:50.510417  762988 filesync.go:149] local asset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> 7484972.pem in /etc/ssl/certs
	I0920 18:47:50.510429  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /etc/ssl/certs/7484972.pem
	I0920 18:47:50.510527  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:47:50.520201  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 18:47:50.544692  762988 start.go:296] duration metric: took 123.962986ms for postStartSetup
	I0920 18:47:50.544747  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetConfigRaw
	I0920 18:47:50.545353  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetIP
	I0920 18:47:50.548132  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.548490  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:50.548517  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.548850  762988 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 18:47:50.549085  762988 start.go:128] duration metric: took 25.06099769s to createHost
	I0920 18:47:50.549116  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:47:50.551581  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.551997  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:50.552025  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.552177  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:47:50.552377  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:50.552543  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:50.552681  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:47:50.552832  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:47:50.553008  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0920 18:47:50.553021  762988 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:47:50.655701  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726858070.610915334
	
	I0920 18:47:50.655725  762988 fix.go:216] guest clock: 1726858070.610915334
	I0920 18:47:50.655734  762988 fix.go:229] Guest: 2024-09-20 18:47:50.610915334 +0000 UTC Remote: 2024-09-20 18:47:50.549100081 +0000 UTC m=+71.798161303 (delta=61.815253ms)
	I0920 18:47:50.655756  762988 fix.go:200] guest clock delta is within tolerance: 61.815253ms
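	The fix.go entries above read the guest's "date +%s.%N" output and compare it against the host-side timestamp recorded in the same log line, accepting the 61.815253ms skew because it falls within tolerance. A minimal Go sketch of that comparison, reusing the literal values from the log (illustrative only, not minikube's code):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts the guest's `date +%s.%N` output into a time.Time.
	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			frac := (parts[1] + "000000000")[:9] // pad/truncate fractional part to nanoseconds
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec).UTC(), nil
	}

	func main() {
		guest, err := parseGuestClock("1726858070.610915334") // value reported by the VM above
		if err != nil {
			panic(err)
		}
		// Host-side reference timestamp taken from the same log entry.
		remote := time.Date(2024, 9, 20, 18, 47, 50, 549100081, time.UTC)
		fmt.Println("guest clock delta:", guest.Sub(remote)) // prints 61.815253ms, within tolerance
	}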
	I0920 18:47:50.655762  762988 start.go:83] releasing machines lock for "ha-525790-m02", held for 25.167790601s
	I0920 18:47:50.655785  762988 main.go:141] libmachine: (ha-525790-m02) Calling .DriverName
	I0920 18:47:50.656107  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetIP
	I0920 18:47:50.658651  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.659046  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:50.659073  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.661685  762988 out.go:177] * Found network options:
	I0920 18:47:50.663168  762988 out.go:177]   - NO_PROXY=192.168.39.149
	W0920 18:47:50.664561  762988 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 18:47:50.664590  762988 main.go:141] libmachine: (ha-525790-m02) Calling .DriverName
	I0920 18:47:50.665196  762988 main.go:141] libmachine: (ha-525790-m02) Calling .DriverName
	I0920 18:47:50.665478  762988 main.go:141] libmachine: (ha-525790-m02) Calling .DriverName
	I0920 18:47:50.665602  762988 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:47:50.665662  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	W0920 18:47:50.665708  762988 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 18:47:50.665796  762988 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:47:50.665818  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 18:47:50.668764  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.668800  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.669194  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:50.669220  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.669246  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:50.669261  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:50.669369  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:47:50.669464  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 18:47:50.669573  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:50.669655  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 18:47:50.669713  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:47:50.669774  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 18:47:50.669844  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/id_rsa Username:docker}
	I0920 18:47:50.669922  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/id_rsa Username:docker}
	I0920 18:47:50.909505  762988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:47:50.915357  762988 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:47:50.915439  762988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:47:50.932184  762988 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:47:50.932206  762988 start.go:495] detecting cgroup driver to use...
	I0920 18:47:50.932266  762988 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:47:50.948362  762988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:47:50.962800  762988 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:47:50.962889  762988 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:47:50.976893  762988 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:47:50.992982  762988 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:47:51.118282  762988 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:47:51.256995  762988 docker.go:233] disabling docker service ...
	I0920 18:47:51.257080  762988 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:47:51.271445  762988 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:47:51.284437  762988 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:47:51.427984  762988 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:47:51.540460  762988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:47:51.554587  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:47:51.573609  762988 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:47:51.573684  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:51.583854  762988 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:47:51.583919  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:51.594247  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:51.604465  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:51.614547  762988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:47:51.624622  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:51.634811  762988 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:51.651778  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:47:51.661817  762988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:47:51.670752  762988 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:47:51.670816  762988 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:47:51.683631  762988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:47:51.692558  762988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:47:51.804846  762988 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:47:51.893367  762988 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:47:51.893448  762988 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:47:51.898101  762988 start.go:563] Will wait 60s for crictl version
	I0920 18:47:51.898148  762988 ssh_runner.go:195] Run: which crictl
	I0920 18:47:51.901983  762988 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:47:51.945514  762988 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:47:51.945611  762988 ssh_runner.go:195] Run: crio --version
	I0920 18:47:51.973141  762988 ssh_runner.go:195] Run: crio --version
	I0920 18:47:52.003666  762988 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:47:52.005189  762988 out.go:177]   - env NO_PROXY=192.168.39.149
	I0920 18:47:52.006445  762988 main.go:141] libmachine: (ha-525790-m02) Calling .GetIP
	I0920 18:47:52.008892  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:52.009199  762988 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:47:40 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 18:47:52.009224  762988 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 18:47:52.009410  762988 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:47:52.013674  762988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:47:52.025912  762988 mustload.go:65] Loading cluster: ha-525790
	I0920 18:47:52.026090  762988 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:47:52.026337  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:47:52.026371  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:47:52.041555  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38991
	I0920 18:47:52.042164  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:47:52.042654  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:47:52.042674  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:47:52.043081  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:47:52.043293  762988 main.go:141] libmachine: (ha-525790) Calling .GetState
	I0920 18:47:52.044999  762988 host.go:66] Checking if "ha-525790" exists ...
	I0920 18:47:52.045304  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:47:52.045340  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:47:52.060489  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44511
	I0920 18:47:52.060988  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:47:52.061514  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:47:52.061548  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:47:52.061872  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:47:52.062063  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:52.062249  762988 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790 for IP: 192.168.39.246
	I0920 18:47:52.062265  762988 certs.go:194] generating shared ca certs ...
	I0920 18:47:52.062284  762988 certs.go:226] acquiring lock for ca certs: {Name:mkf559981e1ff96dd3b092845a7637f34a653668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:52.062496  762988 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key
	I0920 18:47:52.062557  762988 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key
	I0920 18:47:52.062572  762988 certs.go:256] generating profile certs ...
	I0920 18:47:52.062674  762988 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.key
	I0920 18:47:52.062712  762988 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.b06313b5
	I0920 18:47:52.062734  762988 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.b06313b5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.149 192.168.39.246 192.168.39.254]
	I0920 18:47:52.367330  762988 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.b06313b5 ...
	I0920 18:47:52.367365  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.b06313b5: {Name:mka76a58a80092d1cbec495d718f7bdea16bb00c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:52.367534  762988 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.b06313b5 ...
	I0920 18:47:52.367547  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.b06313b5: {Name:mkf8231ebc436432da2597e17792d752485bca58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:47:52.367622  762988 certs.go:381] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.b06313b5 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt
	I0920 18:47:52.367755  762988 certs.go:385] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.b06313b5 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key
	I0920 18:47:52.367883  762988 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key
	I0920 18:47:52.367899  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 18:47:52.367912  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 18:47:52.367926  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 18:47:52.367938  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 18:47:52.367950  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 18:47:52.367961  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 18:47:52.367973  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 18:47:52.367983  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 18:47:52.368035  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem (1338 bytes)
	W0920 18:47:52.368066  762988 certs.go:480] ignoring /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497_empty.pem, impossibly tiny 0 bytes
	I0920 18:47:52.368075  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:47:52.368096  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:47:52.368117  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:47:52.368141  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem (1679 bytes)
	I0920 18:47:52.368184  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 18:47:52.368212  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:47:52.368225  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem -> /usr/share/ca-certificates/748497.pem
	I0920 18:47:52.368237  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /usr/share/ca-certificates/7484972.pem
	I0920 18:47:52.368269  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:52.371227  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:52.371645  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:52.371674  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:52.371783  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:52.371999  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:52.372168  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:52.372324  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:47:52.443286  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0920 18:47:52.448837  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0920 18:47:52.460311  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0920 18:47:52.464490  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0920 18:47:52.475983  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0920 18:47:52.480213  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0920 18:47:52.494615  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0920 18:47:52.499007  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0920 18:47:52.508955  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0920 18:47:52.516124  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0920 18:47:52.526659  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0920 18:47:52.530903  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0920 18:47:52.541062  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:47:52.569451  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:47:52.592930  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:47:52.616256  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:47:52.639385  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0920 18:47:52.662394  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:47:52.686445  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:47:52.710153  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:47:52.734191  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:47:52.757258  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem --> /usr/share/ca-certificates/748497.pem (1338 bytes)
	I0920 18:47:52.780903  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /usr/share/ca-certificates/7484972.pem (1708 bytes)
	I0920 18:47:52.804939  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0920 18:47:52.821362  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0920 18:47:52.837317  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0920 18:47:52.853233  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0920 18:47:52.869254  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0920 18:47:52.885005  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0920 18:47:52.900806  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0920 18:47:52.917027  762988 ssh_runner.go:195] Run: openssl version
	I0920 18:47:52.922702  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:47:52.933000  762988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:47:52.937464  762988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:47:52.937523  762988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:47:52.943170  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:47:52.953509  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/748497.pem && ln -fs /usr/share/ca-certificates/748497.pem /etc/ssl/certs/748497.pem"
	I0920 18:47:52.964038  762988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/748497.pem
	I0920 18:47:52.968718  762988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 18:33 /usr/share/ca-certificates/748497.pem
	I0920 18:47:52.968771  762988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/748497.pem
	I0920 18:47:52.974378  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/748497.pem /etc/ssl/certs/51391683.0"
	I0920 18:47:52.984752  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7484972.pem && ln -fs /usr/share/ca-certificates/7484972.pem /etc/ssl/certs/7484972.pem"
	I0920 18:47:52.994888  762988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7484972.pem
	I0920 18:47:52.999311  762988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 18:33 /usr/share/ca-certificates/7484972.pem
	I0920 18:47:52.999370  762988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7484972.pem
	I0920 18:47:53.005001  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7484972.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:47:53.015691  762988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:47:53.019635  762988 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 18:47:53.019692  762988 kubeadm.go:934] updating node {m02 192.168.39.246 8443 v1.31.1 crio true true} ...
	I0920 18:47:53.019793  762988 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-525790-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:47:53.019822  762988 kube-vip.go:115] generating kube-vip config ...
	I0920 18:47:53.019860  762988 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 18:47:53.036153  762988 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 18:47:53.036237  762988 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0920 18:47:53.036305  762988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:47:53.046004  762988 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0920 18:47:53.046062  762988 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0920 18:47:53.055936  762988 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0920 18:47:53.055979  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 18:47:53.056005  762988 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0920 18:47:53.056053  762988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 18:47:53.056076  762988 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0920 18:47:53.060289  762988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0920 18:47:53.060315  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0920 18:47:53.789944  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 18:47:53.790047  762988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 18:47:53.795156  762988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0920 18:47:53.795193  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0920 18:47:53.889636  762988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:47:53.918466  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 18:47:53.918585  762988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 18:47:53.930311  762988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0920 18:47:53.930362  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0920 18:47:54.378013  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0920 18:47:54.388156  762988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 18:47:54.404650  762988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:47:54.420945  762988 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 18:47:54.437522  762988 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 18:47:54.441369  762988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:47:54.453920  762988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:47:54.571913  762988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:47:54.589386  762988 host.go:66] Checking if "ha-525790" exists ...
	I0920 18:47:54.589919  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:47:54.589985  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:47:54.605308  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39785
	I0920 18:47:54.605924  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:47:54.606447  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:47:54.606470  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:47:54.606870  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:47:54.607082  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:47:54.607245  762988 start.go:317] joinCluster: &{Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:47:54.607339  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0920 18:47:54.607355  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:47:54.610593  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:54.611156  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:47:54.611186  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:47:54.611363  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:47:54.611536  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:47:54.611703  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:47:54.611875  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:47:54.765700  762988 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:47:54.765757  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token fgipyq.kw78xdqejinofgh1 --discovery-token-ca-cert-hash sha256:947ef21afc8104efa9fe7e5dbe397ab7540e2665a521761d784eb9c9d11b061d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-525790-m02 --control-plane --apiserver-advertise-address=192.168.39.246 --apiserver-bind-port=8443"
	I0920 18:48:15.991126  762988 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token fgipyq.kw78xdqejinofgh1 --discovery-token-ca-cert-hash sha256:947ef21afc8104efa9fe7e5dbe397ab7540e2665a521761d784eb9c9d11b061d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-525790-m02 --control-plane --apiserver-advertise-address=192.168.39.246 --apiserver-bind-port=8443": (21.225342383s)
	I0920 18:48:15.991161  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0920 18:48:16.566701  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-525790-m02 minikube.k8s.io/updated_at=2024_09_20T18_48_16_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a minikube.k8s.io/name=ha-525790 minikube.k8s.io/primary=false
	I0920 18:48:16.719509  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-525790-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0920 18:48:16.847244  762988 start.go:319] duration metric: took 22.239995563s to joinCluster
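	The join sequence above (token create on the primary, kubeadm join on the new node, kubelet enable/start, then the label and taint calls against the primary) can be reproduced by hand. Below is a minimal Go sketch of the node-side steps only, assuming root privileges; the token, CA-cert hash and advertise address are hypothetical placeholders, not values from this run.

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// joinControlPlane sketches the node-side commands the log runs over SSH:
	// kubeadm join for a control-plane member, then enabling and starting kubelet.
	// Token, hash and advertise address are placeholders.
	func joinControlPlane() error {
		cmds := [][]string{
			{"kubeadm", "join", "control-plane.minikube.internal:8443",
				"--token", "<token>",
				"--discovery-token-ca-cert-hash", "sha256:<hash>",
				"--control-plane",
				"--apiserver-advertise-address", "<node-ip>",
				"--apiserver-bind-port", "8443"},
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", "--now", "kubelet"},
		}
		for _, c := range cmds {
			if out, err := exec.Command(c[0], c[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v failed: %v: %s", c, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := joinControlPlane(); err != nil {
			log.Fatal(err)
		}
	}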
	I0920 18:48:16.847322  762988 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:48:16.847615  762988 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:48:16.849000  762988 out.go:177] * Verifying Kubernetes components...
	I0920 18:48:16.850372  762988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:48:17.092103  762988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:48:17.120788  762988 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:48:17.121173  762988 kapi.go:59] client config for ha-525790: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.crt", KeyFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.key", CAFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0920 18:48:17.121271  762988 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.149:8443
	I0920 18:48:17.121564  762988 node_ready.go:35] waiting up to 6m0s for node "ha-525790-m02" to be "Ready" ...
	I0920 18:48:17.121729  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:17.121741  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:17.121752  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:17.121758  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:17.132247  762988 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0920 18:48:17.622473  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:17.622504  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:17.622516  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:17.622523  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:17.625769  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:18.122399  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:18.122419  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:18.122427  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:18.122432  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:18.136165  762988 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0920 18:48:18.622000  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:18.622027  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:18.622037  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:18.622041  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:18.626792  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:19.122652  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:19.122677  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:19.122685  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:19.122691  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:19.125929  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:19.126379  762988 node_ready.go:53] node "ha-525790-m02" has status "Ready":"False"
	I0920 18:48:19.622318  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:19.622339  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:19.622347  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:19.622351  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:19.625821  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:20.121842  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:20.121865  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:20.121874  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:20.121879  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:20.126973  762988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 18:48:20.622440  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:20.622464  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:20.622472  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:20.622476  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:20.625669  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:21.122479  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:21.122503  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:21.122514  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:21.122518  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:21.126309  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:21.127070  762988 node_ready.go:53] node "ha-525790-m02" has status "Ready":"False"
	I0920 18:48:21.622431  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:21.622455  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:21.622464  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:21.622467  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:21.625353  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:22.122551  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:22.122577  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:22.122588  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:22.122594  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:22.130464  762988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0920 18:48:22.622444  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:22.622465  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:22.622473  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:22.622476  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:22.624966  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:23.121881  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:23.121906  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:23.121915  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:23.121918  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:23.126058  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:23.621933  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:23.621958  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:23.621967  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:23.621971  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:23.625609  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:23.626079  762988 node_ready.go:53] node "ha-525790-m02" has status "Ready":"False"
	I0920 18:48:24.121954  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:24.121979  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:24.121986  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:24.121990  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:24.126296  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:24.622206  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:24.622229  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:24.622237  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:24.622241  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:24.625435  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:25.121906  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:25.121929  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:25.121937  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:25.121943  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:25.125410  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:25.622826  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:25.622865  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:25.622883  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:25.622888  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:25.626033  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:25.626689  762988 node_ready.go:53] node "ha-525790-m02" has status "Ready":"False"
	I0920 18:48:26.121997  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:26.122029  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:26.122041  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:26.122047  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:26.126269  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:26.622175  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:26.622199  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:26.622207  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:26.622216  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:26.625403  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:27.122340  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:27.122371  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:27.122386  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:27.122391  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:27.126523  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:27.622670  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:27.622696  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:27.622708  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:27.622714  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:27.625864  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:28.121813  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:28.121839  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:28.121856  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:28.121861  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:28.127100  762988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 18:48:28.127893  762988 node_ready.go:53] node "ha-525790-m02" has status "Ready":"False"
	I0920 18:48:28.622194  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:28.622218  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:28.622226  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:28.622231  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:28.625675  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:29.122510  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:29.122544  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:29.122556  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:29.122561  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:29.126584  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:29.622212  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:29.622230  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:29.622238  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:29.622242  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:29.625683  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:30.121899  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:30.121923  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:30.121931  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:30.121938  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:30.126500  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:30.622237  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:30.622262  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:30.622273  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:30.622282  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:30.625998  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:30.626739  762988 node_ready.go:53] node "ha-525790-m02" has status "Ready":"False"
	I0920 18:48:31.122135  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:31.122162  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:31.122175  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:31.122180  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:31.126468  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:31.622529  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:31.622556  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:31.622568  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:31.622574  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:31.625581  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:32.122718  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:32.122743  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:32.122753  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:32.122758  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:32.126212  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:32.622048  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:32.622078  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:32.622090  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:32.622097  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:32.625566  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:33.122722  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:33.122748  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:33.122759  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:33.122766  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:33.125690  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:33.126429  762988 node_ready.go:53] node "ha-525790-m02" has status "Ready":"False"
	I0920 18:48:33.622805  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:33.622839  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:33.622867  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:33.622874  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:33.626126  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:34.122562  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:34.122584  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.122593  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.122596  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.125490  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:34.126097  762988 node_ready.go:49] node "ha-525790-m02" has status "Ready":"True"
	I0920 18:48:34.126121  762988 node_ready.go:38] duration metric: took 17.004511153s for node "ha-525790-m02" to be "Ready" ...
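	The node_ready wait above is a plain poll of the node's Ready condition, roughly every 500ms for up to 6 minutes. A minimal client-go sketch of the same check; the kubeconfig path is an assumed placeholder.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node's Ready condition until it is True or the
	// timeout expires, mirroring the GET loop in the log above.
	func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("node %q not Ready within %v", name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitNodeReady(cs, "ha-525790-m02", 6*time.Minute); err != nil {
			panic(err)
		}
	}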
	I0920 18:48:34.126132  762988 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:48:34.126214  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods
	I0920 18:48:34.126225  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.126235  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.126244  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.130332  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:34.136520  762988 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nfnkj" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.136636  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nfnkj
	I0920 18:48:34.136651  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.136659  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.136662  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.139356  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:34.140019  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:34.140035  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.140044  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.140050  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.142804  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:34.143520  762988 pod_ready.go:93] pod "coredns-7c65d6cfc9-nfnkj" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:34.143541  762988 pod_ready.go:82] duration metric: took 6.997099ms for pod "coredns-7c65d6cfc9-nfnkj" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.143552  762988 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rpcds" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.143630  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rpcds
	I0920 18:48:34.143640  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.143650  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.143656  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.146528  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:34.147267  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:34.147282  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.147291  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.147298  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.149448  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:34.149863  762988 pod_ready.go:93] pod "coredns-7c65d6cfc9-rpcds" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:34.149880  762988 pod_ready.go:82] duration metric: took 6.32048ms for pod "coredns-7c65d6cfc9-rpcds" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.149890  762988 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.149955  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/etcd-ha-525790
	I0920 18:48:34.149964  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.149974  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.149982  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.152307  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:34.152827  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:34.152841  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.152848  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.152852  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.155039  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:34.155552  762988 pod_ready.go:93] pod "etcd-ha-525790" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:34.155568  762988 pod_ready.go:82] duration metric: took 5.670104ms for pod "etcd-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.155578  762988 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.155636  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/etcd-ha-525790-m02
	I0920 18:48:34.155646  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.155655  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.155660  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.157775  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:34.158230  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:34.158244  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.158252  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.158256  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.160455  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:34.161045  762988 pod_ready.go:93] pod "etcd-ha-525790-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:34.161062  762988 pod_ready.go:82] duration metric: took 5.476839ms for pod "etcd-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.161078  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.323482  762988 request.go:632] Waited for 162.335052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790
	I0920 18:48:34.323561  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790
	I0920 18:48:34.323567  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.323577  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.323596  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.327021  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:34.523234  762988 request.go:632] Waited for 195.376284ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:34.523291  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:34.523297  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.523304  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.523308  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.526504  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:34.527263  762988 pod_ready.go:93] pod "kube-apiserver-ha-525790" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:34.527282  762988 pod_ready.go:82] duration metric: took 366.197667ms for pod "kube-apiserver-ha-525790" in "kube-system" namespace to be "Ready" ...
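	The repeated "Waited for ... due to client-side throttling" messages above come from client-go's default client-side rate limits (QPS 5, burst 10). A test client that issues bursts of GETs like these can raise the limits on its rest.Config; an illustrative sketch with placeholder values:

	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// kubeconfig path is an assumed placeholder.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		// With QPS and Burst left at 0, client-go falls back to QPS 5 / burst 10,
		// which is what produces the throttling waits logged above.
		cfg.QPS = 50
		cfg.Burst = 100
		_ = kubernetes.NewForConfigOrDie(cfg)
	}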
	I0920 18:48:34.527291  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.722970  762988 request.go:632] Waited for 195.600109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790-m02
	I0920 18:48:34.723047  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790-m02
	I0920 18:48:34.723055  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.723066  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.723077  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.727681  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:34.922800  762988 request.go:632] Waited for 194.329492ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:34.922877  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:34.922883  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:34.922890  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:34.922895  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:34.925710  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:48:34.926612  762988 pod_ready.go:93] pod "kube-apiserver-ha-525790-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:34.926641  762988 pod_ready.go:82] duration metric: took 399.342285ms for pod "kube-apiserver-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:34.926656  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:35.122660  762988 request.go:632] Waited for 195.882629ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-525790
	I0920 18:48:35.122740  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-525790
	I0920 18:48:35.122749  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:35.122759  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:35.122770  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:35.126705  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:35.322726  762988 request.go:632] Waited for 195.293792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:35.322782  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:35.322787  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:35.322795  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:35.322800  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:35.326393  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:35.326918  762988 pod_ready.go:93] pod "kube-controller-manager-ha-525790" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:35.326946  762988 pod_ready.go:82] duration metric: took 400.278191ms for pod "kube-controller-manager-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:35.326961  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:35.523401  762988 request.go:632] Waited for 196.343619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-525790-m02
	I0920 18:48:35.523471  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-525790-m02
	I0920 18:48:35.523481  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:35.523489  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:35.523496  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:35.526931  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:35.722974  762988 request.go:632] Waited for 195.371903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:35.723051  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:35.723062  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:35.723074  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:35.723083  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:35.726332  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:35.726861  762988 pod_ready.go:93] pod "kube-controller-manager-ha-525790-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:35.726891  762988 pod_ready.go:82] duration metric: took 399.92136ms for pod "kube-controller-manager-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:35.726906  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-958jz" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:35.922820  762988 request.go:632] Waited for 195.83508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-958jz
	I0920 18:48:35.922930  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-958jz
	I0920 18:48:35.922936  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:35.922947  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:35.922954  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:35.926053  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:36.123110  762988 request.go:632] Waited for 196.38428ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:36.123185  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:36.123190  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:36.123198  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:36.123202  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:36.126954  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:36.127418  762988 pod_ready.go:93] pod "kube-proxy-958jz" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:36.127437  762988 pod_ready.go:82] duration metric: took 400.524478ms for pod "kube-proxy-958jz" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:36.127449  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sspfs" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:36.323527  762988 request.go:632] Waited for 195.98167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sspfs
	I0920 18:48:36.323598  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sspfs
	I0920 18:48:36.323607  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:36.323616  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:36.323622  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:36.327351  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:36.523422  762988 request.go:632] Waited for 195.381458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:36.523486  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:36.523492  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:36.523500  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:36.523509  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:36.526668  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:36.527360  762988 pod_ready.go:93] pod "kube-proxy-sspfs" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:36.527381  762988 pod_ready.go:82] duration metric: took 399.9242ms for pod "kube-proxy-sspfs" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:36.527392  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:36.723613  762988 request.go:632] Waited for 196.121297ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-525790
	I0920 18:48:36.723676  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-525790
	I0920 18:48:36.723681  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:36.723690  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:36.723695  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:36.726896  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:36.922949  762988 request.go:632] Waited for 195.378354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:36.923034  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:48:36.923046  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:36.923061  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:36.923071  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:36.926320  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:36.926935  762988 pod_ready.go:93] pod "kube-scheduler-ha-525790" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:36.926956  762988 pod_ready.go:82] duration metric: took 399.558392ms for pod "kube-scheduler-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:36.926967  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:37.122901  762988 request.go:632] Waited for 195.82569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-525790-m02
	I0920 18:48:37.122982  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-525790-m02
	I0920 18:48:37.122988  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:37.122996  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:37.123003  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:37.126347  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:37.323372  762988 request.go:632] Waited for 196.406319ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:37.323437  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:48:37.323442  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:37.323450  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:37.323457  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:37.326709  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:37.327455  762988 pod_ready.go:93] pod "kube-scheduler-ha-525790-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:48:37.327476  762988 pod_ready.go:82] duration metric: took 400.502746ms for pod "kube-scheduler-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:48:37.327489  762988 pod_ready.go:39] duration metric: took 3.201339533s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
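	Each of the pod waits above follows the same pattern: fetch the pod, check its Ready condition, then confirm the hosting node. A minimal sketch of the pod-side check, with an assumed kubeconfig path; the pod name is taken from the log.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ok, err := podReady(cs, "kube-system", "kube-scheduler-ha-525790-m02")
		fmt.Println(ok, err)
	}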
	I0920 18:48:37.327504  762988 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:48:37.327555  762988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:48:37.343797  762988 api_server.go:72] duration metric: took 20.496433387s to wait for apiserver process to appear ...
	I0920 18:48:37.343829  762988 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:48:37.343854  762988 api_server.go:253] Checking apiserver healthz at https://192.168.39.149:8443/healthz ...
	I0920 18:48:37.348107  762988 api_server.go:279] https://192.168.39.149:8443/healthz returned 200:
	ok
	I0920 18:48:37.348169  762988 round_trippers.go:463] GET https://192.168.39.149:8443/version
	I0920 18:48:37.348176  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:37.348184  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:37.348191  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:37.349126  762988 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0920 18:48:37.349250  762988 api_server.go:141] control plane version: v1.31.1
	I0920 18:48:37.349267  762988 api_server.go:131] duration metric: took 5.431776ms to wait for apiserver health ...
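	The apiserver health and version probes above map to the /healthz and /version endpoints. A minimal client-go sketch of both calls; the kubeconfig path is an assumed placeholder.

	package main

	import (
		"context"
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// GET /healthz, as in the api_server.go check above; expect "ok".
		body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
		if err != nil {
			panic(err)
		}
		fmt.Printf("healthz: %s\n", body)

		// GET /version, as in the control-plane version check above.
		v, err := cs.Discovery().ServerVersion()
		if err != nil {
			panic(err)
		}
		fmt.Println("control plane version:", v.GitVersion)
	}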
	I0920 18:48:37.349274  762988 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:48:37.522627  762988 request.go:632] Waited for 173.275089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods
	I0920 18:48:37.522715  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods
	I0920 18:48:37.522723  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:37.522731  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:37.522738  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:37.528234  762988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 18:48:37.534123  762988 system_pods.go:59] 17 kube-system pods found
	I0920 18:48:37.534155  762988 system_pods.go:61] "coredns-7c65d6cfc9-nfnkj" [7994989d-6bfa-4d25-b7b7-662d2e6c742c] Running
	I0920 18:48:37.534161  762988 system_pods.go:61] "coredns-7c65d6cfc9-rpcds" [7db58219-7147-4a45-b233-ef3c698566ef] Running
	I0920 18:48:37.534171  762988 system_pods.go:61] "etcd-ha-525790" [f23cd40e-ac8d-451b-9bf9-2ef5d62ef4b6] Running
	I0920 18:48:37.534176  762988 system_pods.go:61] "etcd-ha-525790-m02" [5a29103e-6da3-40d1-be3c-58fdc0f28b54] Running
	I0920 18:48:37.534181  762988 system_pods.go:61] "kindnet-8glgp" [f462782e-1ff6-410a-8359-de3360d380b0] Running
	I0920 18:48:37.534186  762988 system_pods.go:61] "kindnet-9qbm6" [87e8ae18-a561-48ec-9835-27446b6917d3] Running
	I0920 18:48:37.534190  762988 system_pods.go:61] "kube-apiserver-ha-525790" [0e3563fd-5185-4dc6-8d9b-a7d954b96c8d] Running
	I0920 18:48:37.534195  762988 system_pods.go:61] "kube-apiserver-ha-525790-m02" [b3966e2e-ce3d-4916-b73c-0d80cd1793f0] Running
	I0920 18:48:37.534202  762988 system_pods.go:61] "kube-controller-manager-ha-525790" [1d695853-6a7e-487d-a52b-9aceb1fc9ff3] Running
	I0920 18:48:37.534210  762988 system_pods.go:61] "kube-controller-manager-ha-525790-m02" [090c1833-3800-4e13-b9a7-c03680f3d55d] Running
	I0920 18:48:37.534213  762988 system_pods.go:61] "kube-proxy-958jz" [46603403-eb82-4f15-a1da-da62194a072f] Running
	I0920 18:48:37.534216  762988 system_pods.go:61] "kube-proxy-sspfs" [15203515-fc45-4624-b97e-8ec247f01e2d] Running
	I0920 18:48:37.534221  762988 system_pods.go:61] "kube-scheduler-ha-525790" [8cb7e23e-c1d1-4753-9758-b17ef9fd08d7] Running
	I0920 18:48:37.534224  762988 system_pods.go:61] "kube-scheduler-ha-525790-m02" [dc9a5561-5d41-445d-a0ba-de3b2405f821] Running
	I0920 18:48:37.534228  762988 system_pods.go:61] "kube-vip-ha-525790" [0b318b1e-7a85-4c8c-8a5a-2fee226d7702] Running
	I0920 18:48:37.534231  762988 system_pods.go:61] "kube-vip-ha-525790-m02" [f2316231-5c1d-4bf2-ae62-5a4202b5818b] Running
	I0920 18:48:37.534234  762988 system_pods.go:61] "storage-provisioner" [ea6bf34f-c1f7-4216-a61f-be30846c991b] Running
	I0920 18:48:37.534241  762988 system_pods.go:74] duration metric: took 184.960329ms to wait for pod list to return data ...
	I0920 18:48:37.534252  762988 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:48:37.722639  762988 request.go:632] Waited for 188.265166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/default/serviceaccounts
	I0920 18:48:37.722711  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/default/serviceaccounts
	I0920 18:48:37.722717  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:37.722726  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:37.722730  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:37.726193  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:37.726449  762988 default_sa.go:45] found service account: "default"
	I0920 18:48:37.726469  762988 default_sa.go:55] duration metric: took 192.210022ms for default service account to be created ...
	I0920 18:48:37.726480  762988 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:48:37.922955  762988 request.go:632] Waited for 196.382479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods
	I0920 18:48:37.923039  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods
	I0920 18:48:37.923050  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:37.923065  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:37.923072  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:37.927492  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:48:37.932712  762988 system_pods.go:86] 17 kube-system pods found
	I0920 18:48:37.932740  762988 system_pods.go:89] "coredns-7c65d6cfc9-nfnkj" [7994989d-6bfa-4d25-b7b7-662d2e6c742c] Running
	I0920 18:48:37.932746  762988 system_pods.go:89] "coredns-7c65d6cfc9-rpcds" [7db58219-7147-4a45-b233-ef3c698566ef] Running
	I0920 18:48:37.932750  762988 system_pods.go:89] "etcd-ha-525790" [f23cd40e-ac8d-451b-9bf9-2ef5d62ef4b6] Running
	I0920 18:48:37.932754  762988 system_pods.go:89] "etcd-ha-525790-m02" [5a29103e-6da3-40d1-be3c-58fdc0f28b54] Running
	I0920 18:48:37.932757  762988 system_pods.go:89] "kindnet-8glgp" [f462782e-1ff6-410a-8359-de3360d380b0] Running
	I0920 18:48:37.932761  762988 system_pods.go:89] "kindnet-9qbm6" [87e8ae18-a561-48ec-9835-27446b6917d3] Running
	I0920 18:48:37.932765  762988 system_pods.go:89] "kube-apiserver-ha-525790" [0e3563fd-5185-4dc6-8d9b-a7d954b96c8d] Running
	I0920 18:48:37.932769  762988 system_pods.go:89] "kube-apiserver-ha-525790-m02" [b3966e2e-ce3d-4916-b73c-0d80cd1793f0] Running
	I0920 18:48:37.932774  762988 system_pods.go:89] "kube-controller-manager-ha-525790" [1d695853-6a7e-487d-a52b-9aceb1fc9ff3] Running
	I0920 18:48:37.932779  762988 system_pods.go:89] "kube-controller-manager-ha-525790-m02" [090c1833-3800-4e13-b9a7-c03680f3d55d] Running
	I0920 18:48:37.932786  762988 system_pods.go:89] "kube-proxy-958jz" [46603403-eb82-4f15-a1da-da62194a072f] Running
	I0920 18:48:37.932789  762988 system_pods.go:89] "kube-proxy-sspfs" [15203515-fc45-4624-b97e-8ec247f01e2d] Running
	I0920 18:48:37.932792  762988 system_pods.go:89] "kube-scheduler-ha-525790" [8cb7e23e-c1d1-4753-9758-b17ef9fd08d7] Running
	I0920 18:48:37.932797  762988 system_pods.go:89] "kube-scheduler-ha-525790-m02" [dc9a5561-5d41-445d-a0ba-de3b2405f821] Running
	I0920 18:48:37.932800  762988 system_pods.go:89] "kube-vip-ha-525790" [0b318b1e-7a85-4c8c-8a5a-2fee226d7702] Running
	I0920 18:48:37.932805  762988 system_pods.go:89] "kube-vip-ha-525790-m02" [f2316231-5c1d-4bf2-ae62-5a4202b5818b] Running
	I0920 18:48:37.932808  762988 system_pods.go:89] "storage-provisioner" [ea6bf34f-c1f7-4216-a61f-be30846c991b] Running
	I0920 18:48:37.932815  762988 system_pods.go:126] duration metric: took 206.326319ms to wait for k8s-apps to be running ...
	I0920 18:48:37.932824  762988 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:48:37.932877  762988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:48:37.949333  762988 system_svc.go:56] duration metric: took 16.495186ms WaitForService to wait for kubelet
	I0920 18:48:37.949367  762988 kubeadm.go:582] duration metric: took 21.102009969s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:48:37.949386  762988 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:48:38.122741  762988 request.go:632] Waited for 173.263132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes
	I0920 18:48:38.122838  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes
	I0920 18:48:38.122859  762988 round_trippers.go:469] Request Headers:
	I0920 18:48:38.122875  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:48:38.122883  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:48:38.126598  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:48:38.127344  762988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:48:38.127374  762988 node_conditions.go:123] node cpu capacity is 2
	I0920 18:48:38.127387  762988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:48:38.127390  762988 node_conditions.go:123] node cpu capacity is 2
	I0920 18:48:38.127395  762988 node_conditions.go:105] duration metric: took 178.00469ms to run NodePressure ...
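	The NodePressure step reads each node's reported capacity. A minimal client-go sketch that lists the nodes and prints the same ephemeral-storage and cpu figures; the kubeconfig path is an assumed placeholder.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
			// A pressure check would also inspect n.Status.Conditions for
			// MemoryPressure / DiskPressure being False.
		}
	}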
	I0920 18:48:38.127407  762988 start.go:241] waiting for startup goroutines ...
	I0920 18:48:38.127433  762988 start.go:255] writing updated cluster config ...
	I0920 18:48:38.129743  762988 out.go:201] 
	I0920 18:48:38.131559  762988 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:48:38.131667  762988 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 18:48:38.133474  762988 out.go:177] * Starting "ha-525790-m03" control-plane node in "ha-525790" cluster
	I0920 18:48:38.134688  762988 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:48:38.134716  762988 cache.go:56] Caching tarball of preloaded images
	I0920 18:48:38.134840  762988 preload.go:172] Found /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:48:38.134876  762988 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:48:38.135002  762988 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 18:48:38.135229  762988 start.go:360] acquireMachinesLock for ha-525790-m03: {Name:mke27b943eaf3105a3a7818ba8cbb5bd07aa92e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:48:38.135283  762988 start.go:364] duration metric: took 31.132µs to acquireMachinesLock for "ha-525790-m03"
	I0920 18:48:38.135310  762988 start.go:93] Provisioning new machine with config: &{Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:48:38.135483  762988 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0920 18:48:38.137252  762988 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 18:48:38.137351  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:48:38.137389  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:48:38.152991  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40037
	I0920 18:48:38.153403  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:48:38.153921  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:48:38.153950  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:48:38.154269  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:48:38.154503  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetMachineName
	I0920 18:48:38.154635  762988 main.go:141] libmachine: (ha-525790-m03) Calling .DriverName
	I0920 18:48:38.154794  762988 start.go:159] libmachine.API.Create for "ha-525790" (driver="kvm2")
	I0920 18:48:38.154827  762988 client.go:168] LocalClient.Create starting
	I0920 18:48:38.154887  762988 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem
	I0920 18:48:38.154928  762988 main.go:141] libmachine: Decoding PEM data...
	I0920 18:48:38.154951  762988 main.go:141] libmachine: Parsing certificate...
	I0920 18:48:38.155015  762988 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem
	I0920 18:48:38.155046  762988 main.go:141] libmachine: Decoding PEM data...
	I0920 18:48:38.155064  762988 main.go:141] libmachine: Parsing certificate...
	I0920 18:48:38.155089  762988 main.go:141] libmachine: Running pre-create checks...
	I0920 18:48:38.155100  762988 main.go:141] libmachine: (ha-525790-m03) Calling .PreCreateCheck
	I0920 18:48:38.155260  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetConfigRaw
	I0920 18:48:38.155601  762988 main.go:141] libmachine: Creating machine...
	I0920 18:48:38.155615  762988 main.go:141] libmachine: (ha-525790-m03) Calling .Create
	I0920 18:48:38.155731  762988 main.go:141] libmachine: (ha-525790-m03) Creating KVM machine...
	I0920 18:48:38.156940  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found existing default KVM network
	I0920 18:48:38.157092  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found existing private KVM network mk-ha-525790
	I0920 18:48:38.157240  762988 main.go:141] libmachine: (ha-525790-m03) Setting up store path in /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03 ...
	I0920 18:48:38.157269  762988 main.go:141] libmachine: (ha-525790-m03) Building disk image from file:///home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 18:48:38.157310  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:38.157208  763765 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:48:38.157402  762988 main.go:141] libmachine: (ha-525790-m03) Downloading /home/jenkins/minikube-integration/19678-739831/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 18:48:38.440404  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:38.440283  763765 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/id_rsa...
	I0920 18:48:38.491702  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:38.491581  763765 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/ha-525790-m03.rawdisk...
	I0920 18:48:38.491754  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Writing magic tar header
	I0920 18:48:38.491768  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Writing SSH key tar header
	I0920 18:48:38.491779  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:38.491723  763765 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03 ...
	I0920 18:48:38.491856  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03
	I0920 18:48:38.491883  762988 main.go:141] libmachine: (ha-525790-m03) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03 (perms=drwx------)
	I0920 18:48:38.491895  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube/machines
	I0920 18:48:38.491911  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:48:38.491922  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831
	I0920 18:48:38.491935  762988 main.go:141] libmachine: (ha-525790-m03) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube/machines (perms=drwxr-xr-x)
	I0920 18:48:38.491947  762988 main.go:141] libmachine: (ha-525790-m03) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube (perms=drwxr-xr-x)
	I0920 18:48:38.491958  762988 main.go:141] libmachine: (ha-525790-m03) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831 (perms=drwxrwxr-x)
	I0920 18:48:38.491971  762988 main.go:141] libmachine: (ha-525790-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 18:48:38.491983  762988 main.go:141] libmachine: (ha-525790-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 18:48:38.491992  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 18:48:38.492002  762988 main.go:141] libmachine: (ha-525790-m03) Creating domain...
	I0920 18:48:38.492014  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Checking permissions on dir: /home/jenkins
	I0920 18:48:38.492025  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Checking permissions on dir: /home
	I0920 18:48:38.492039  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Skipping /home - not owner
	I0920 18:48:38.492931  762988 main.go:141] libmachine: (ha-525790-m03) define libvirt domain using xml: 
	I0920 18:48:38.492957  762988 main.go:141] libmachine: (ha-525790-m03) <domain type='kvm'>
	I0920 18:48:38.492966  762988 main.go:141] libmachine: (ha-525790-m03)   <name>ha-525790-m03</name>
	I0920 18:48:38.492979  762988 main.go:141] libmachine: (ha-525790-m03)   <memory unit='MiB'>2200</memory>
	I0920 18:48:38.492990  762988 main.go:141] libmachine: (ha-525790-m03)   <vcpu>2</vcpu>
	I0920 18:48:38.492996  762988 main.go:141] libmachine: (ha-525790-m03)   <features>
	I0920 18:48:38.493008  762988 main.go:141] libmachine: (ha-525790-m03)     <acpi/>
	I0920 18:48:38.493014  762988 main.go:141] libmachine: (ha-525790-m03)     <apic/>
	I0920 18:48:38.493024  762988 main.go:141] libmachine: (ha-525790-m03)     <pae/>
	I0920 18:48:38.493031  762988 main.go:141] libmachine: (ha-525790-m03)     
	I0920 18:48:38.493036  762988 main.go:141] libmachine: (ha-525790-m03)   </features>
	I0920 18:48:38.493042  762988 main.go:141] libmachine: (ha-525790-m03)   <cpu mode='host-passthrough'>
	I0920 18:48:38.493047  762988 main.go:141] libmachine: (ha-525790-m03)   
	I0920 18:48:38.493051  762988 main.go:141] libmachine: (ha-525790-m03)   </cpu>
	I0920 18:48:38.493058  762988 main.go:141] libmachine: (ha-525790-m03)   <os>
	I0920 18:48:38.493071  762988 main.go:141] libmachine: (ha-525790-m03)     <type>hvm</type>
	I0920 18:48:38.493106  762988 main.go:141] libmachine: (ha-525790-m03)     <boot dev='cdrom'/>
	I0920 18:48:38.493129  762988 main.go:141] libmachine: (ha-525790-m03)     <boot dev='hd'/>
	I0920 18:48:38.493143  762988 main.go:141] libmachine: (ha-525790-m03)     <bootmenu enable='no'/>
	I0920 18:48:38.493157  762988 main.go:141] libmachine: (ha-525790-m03)   </os>
	I0920 18:48:38.493169  762988 main.go:141] libmachine: (ha-525790-m03)   <devices>
	I0920 18:48:38.493180  762988 main.go:141] libmachine: (ha-525790-m03)     <disk type='file' device='cdrom'>
	I0920 18:48:38.493199  762988 main.go:141] libmachine: (ha-525790-m03)       <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/boot2docker.iso'/>
	I0920 18:48:38.493210  762988 main.go:141] libmachine: (ha-525790-m03)       <target dev='hdc' bus='scsi'/>
	I0920 18:48:38.493219  762988 main.go:141] libmachine: (ha-525790-m03)       <readonly/>
	I0920 18:48:38.493233  762988 main.go:141] libmachine: (ha-525790-m03)     </disk>
	I0920 18:48:38.493245  762988 main.go:141] libmachine: (ha-525790-m03)     <disk type='file' device='disk'>
	I0920 18:48:38.493262  762988 main.go:141] libmachine: (ha-525790-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 18:48:38.493279  762988 main.go:141] libmachine: (ha-525790-m03)       <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/ha-525790-m03.rawdisk'/>
	I0920 18:48:38.493292  762988 main.go:141] libmachine: (ha-525790-m03)       <target dev='hda' bus='virtio'/>
	I0920 18:48:38.493309  762988 main.go:141] libmachine: (ha-525790-m03)     </disk>
	I0920 18:48:38.493325  762988 main.go:141] libmachine: (ha-525790-m03)     <interface type='network'>
	I0920 18:48:38.493333  762988 main.go:141] libmachine: (ha-525790-m03)       <source network='mk-ha-525790'/>
	I0920 18:48:38.493341  762988 main.go:141] libmachine: (ha-525790-m03)       <model type='virtio'/>
	I0920 18:48:38.493348  762988 main.go:141] libmachine: (ha-525790-m03)     </interface>
	I0920 18:48:38.493354  762988 main.go:141] libmachine: (ha-525790-m03)     <interface type='network'>
	I0920 18:48:38.493361  762988 main.go:141] libmachine: (ha-525790-m03)       <source network='default'/>
	I0920 18:48:38.493368  762988 main.go:141] libmachine: (ha-525790-m03)       <model type='virtio'/>
	I0920 18:48:38.493373  762988 main.go:141] libmachine: (ha-525790-m03)     </interface>
	I0920 18:48:38.493379  762988 main.go:141] libmachine: (ha-525790-m03)     <serial type='pty'>
	I0920 18:48:38.493384  762988 main.go:141] libmachine: (ha-525790-m03)       <target port='0'/>
	I0920 18:48:38.493391  762988 main.go:141] libmachine: (ha-525790-m03)     </serial>
	I0920 18:48:38.493400  762988 main.go:141] libmachine: (ha-525790-m03)     <console type='pty'>
	I0920 18:48:38.493407  762988 main.go:141] libmachine: (ha-525790-m03)       <target type='serial' port='0'/>
	I0920 18:48:38.493412  762988 main.go:141] libmachine: (ha-525790-m03)     </console>
	I0920 18:48:38.493418  762988 main.go:141] libmachine: (ha-525790-m03)     <rng model='virtio'>
	I0920 18:48:38.493427  762988 main.go:141] libmachine: (ha-525790-m03)       <backend model='random'>/dev/random</backend>
	I0920 18:48:38.493440  762988 main.go:141] libmachine: (ha-525790-m03)     </rng>
	I0920 18:48:38.493450  762988 main.go:141] libmachine: (ha-525790-m03)     
	I0920 18:48:38.493460  762988 main.go:141] libmachine: (ha-525790-m03)     
	I0920 18:48:38.493468  762988 main.go:141] libmachine: (ha-525790-m03)   </devices>
	I0920 18:48:38.493474  762988 main.go:141] libmachine: (ha-525790-m03) </domain>
	I0920 18:48:38.493482  762988 main.go:141] libmachine: (ha-525790-m03) 
	I0920 18:48:38.499885  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:a8:31:1e in network default
	I0920 18:48:38.500386  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:38.500420  762988 main.go:141] libmachine: (ha-525790-m03) Ensuring networks are active...
	I0920 18:48:38.501164  762988 main.go:141] libmachine: (ha-525790-m03) Ensuring network default is active
	I0920 18:48:38.501467  762988 main.go:141] libmachine: (ha-525790-m03) Ensuring network mk-ha-525790 is active
	I0920 18:48:38.501827  762988 main.go:141] libmachine: (ha-525790-m03) Getting domain xml...
	I0920 18:48:38.502449  762988 main.go:141] libmachine: (ha-525790-m03) Creating domain...
	I0920 18:48:39.736443  762988 main.go:141] libmachine: (ha-525790-m03) Waiting to get IP...
	I0920 18:48:39.737400  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:39.737834  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:39.737861  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:39.737801  763765 retry.go:31] will retry after 302.940885ms: waiting for machine to come up
	I0920 18:48:40.042424  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:40.043046  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:40.043071  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:40.042996  763765 retry.go:31] will retry after 350.440595ms: waiting for machine to come up
	I0920 18:48:40.395674  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:40.396221  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:40.396257  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:40.396163  763765 retry.go:31] will retry after 469.287011ms: waiting for machine to come up
	I0920 18:48:40.866499  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:40.866994  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:40.867018  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:40.866942  763765 retry.go:31] will retry after 590.023713ms: waiting for machine to come up
	I0920 18:48:41.458823  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:41.459324  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:41.459354  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:41.459270  763765 retry.go:31] will retry after 548.369209ms: waiting for machine to come up
	I0920 18:48:42.009043  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:42.009525  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:42.009554  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:42.009477  763765 retry.go:31] will retry after 690.597661ms: waiting for machine to come up
	I0920 18:48:42.701450  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:42.701900  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:42.701929  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:42.701849  763765 retry.go:31] will retry after 975.285461ms: waiting for machine to come up
	I0920 18:48:43.678426  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:43.678873  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:43.678903  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:43.678807  763765 retry.go:31] will retry after 921.744359ms: waiting for machine to come up
	I0920 18:48:44.601892  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:44.602442  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:44.602473  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:44.602393  763765 retry.go:31] will retry after 1.426461906s: waiting for machine to come up
	I0920 18:48:46.031141  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:46.031614  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:46.031647  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:46.031561  763765 retry.go:31] will retry after 1.995117324s: waiting for machine to come up
	I0920 18:48:48.028189  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:48.028849  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:48.028882  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:48.028801  763765 retry.go:31] will retry after 2.180775421s: waiting for machine to come up
	I0920 18:48:50.212117  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:50.212617  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:50.212648  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:50.212544  763765 retry.go:31] will retry after 2.921621074s: waiting for machine to come up
	I0920 18:48:53.136087  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:53.136635  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:53.136663  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:53.136590  763765 retry.go:31] will retry after 2.977541046s: waiting for machine to come up
	I0920 18:48:56.115874  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:48:56.116235  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find current IP address of domain ha-525790-m03 in network mk-ha-525790
	I0920 18:48:56.116257  762988 main.go:141] libmachine: (ha-525790-m03) DBG | I0920 18:48:56.116195  763765 retry.go:31] will retry after 3.995277529s: waiting for machine to come up
	I0920 18:49:00.113196  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.113677  762988 main.go:141] libmachine: (ha-525790-m03) Found IP for machine: 192.168.39.105
	I0920 18:49:00.113703  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has current primary IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.113712  762988 main.go:141] libmachine: (ha-525790-m03) Reserving static IP address...
	I0920 18:49:00.114010  762988 main.go:141] libmachine: (ha-525790-m03) DBG | unable to find host DHCP lease matching {name: "ha-525790-m03", mac: "52:54:00:c8:21:86", ip: "192.168.39.105"} in network mk-ha-525790
	I0920 18:49:00.188644  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Getting to WaitForSSH function...
	I0920 18:49:00.188711  762988 main.go:141] libmachine: (ha-525790-m03) Reserved static IP address: 192.168.39.105
	I0920 18:49:00.188740  762988 main.go:141] libmachine: (ha-525790-m03) Waiting for SSH to be available...
	I0920 18:49:00.191758  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.192256  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:00.192284  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.192476  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Using SSH client type: external
	I0920 18:49:00.192503  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/id_rsa (-rw-------)
	I0920 18:49:00.192535  762988 main.go:141] libmachine: (ha-525790-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 18:49:00.192565  762988 main.go:141] libmachine: (ha-525790-m03) DBG | About to run SSH command:
	I0920 18:49:00.192608  762988 main.go:141] libmachine: (ha-525790-m03) DBG | exit 0
	I0920 18:49:00.319098  762988 main.go:141] libmachine: (ha-525790-m03) DBG | SSH cmd err, output: <nil>: 
	I0920 18:49:00.319375  762988 main.go:141] libmachine: (ha-525790-m03) KVM machine creation complete!
	I0920 18:49:00.319707  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetConfigRaw
	I0920 18:49:00.320287  762988 main.go:141] libmachine: (ha-525790-m03) Calling .DriverName
	I0920 18:49:00.320484  762988 main.go:141] libmachine: (ha-525790-m03) Calling .DriverName
	I0920 18:49:00.320624  762988 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 18:49:00.320639  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetState
	I0920 18:49:00.321930  762988 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 18:49:00.321949  762988 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 18:49:00.321957  762988 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 18:49:00.321965  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:49:00.324623  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.325172  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:00.325194  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.325388  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:49:00.325587  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:00.325771  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:00.325922  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:49:00.326093  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:49:00.326319  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0920 18:49:00.326331  762988 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 18:49:00.430187  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:49:00.430218  762988 main.go:141] libmachine: Detecting the provisioner...
	I0920 18:49:00.430229  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:49:00.433076  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.433420  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:00.433448  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.433596  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:49:00.433812  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:00.433990  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:00.434135  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:49:00.434275  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:49:00.434454  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0920 18:49:00.434466  762988 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 18:49:00.539754  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 18:49:00.539823  762988 main.go:141] libmachine: found compatible host: buildroot
	I0920 18:49:00.539832  762988 main.go:141] libmachine: Provisioning with buildroot...
	I0920 18:49:00.539852  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetMachineName
	I0920 18:49:00.540100  762988 buildroot.go:166] provisioning hostname "ha-525790-m03"
	I0920 18:49:00.540117  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetMachineName
	I0920 18:49:00.540338  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:49:00.543112  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.543620  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:00.543653  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.543781  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:49:00.543968  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:00.544100  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:00.544196  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:49:00.544321  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:49:00.544478  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0920 18:49:00.544494  762988 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-525790-m03 && echo "ha-525790-m03" | sudo tee /etc/hostname
	I0920 18:49:00.661965  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-525790-m03
	
	I0920 18:49:00.661996  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:49:00.665201  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.665573  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:00.665605  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.665825  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:49:00.666001  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:00.666174  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:00.666276  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:49:00.666436  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:49:00.666619  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0920 18:49:00.666635  762988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-525790-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-525790-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-525790-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:49:00.779769  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:49:00.779801  762988 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19678-739831/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-739831/.minikube}
	I0920 18:49:00.779819  762988 buildroot.go:174] setting up certificates
	I0920 18:49:00.779830  762988 provision.go:84] configureAuth start
	I0920 18:49:00.779838  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetMachineName
	I0920 18:49:00.780148  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetIP
	I0920 18:49:00.783087  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.783547  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:00.783572  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.783793  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:49:00.786303  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.786669  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:00.786697  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:00.786832  762988 provision.go:143] copyHostCerts
	I0920 18:49:00.786879  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 18:49:00.786917  762988 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem, removing ...
	I0920 18:49:00.786928  762988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 18:49:00.787003  762988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem (1078 bytes)
	I0920 18:49:00.787095  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 18:49:00.787123  762988 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem, removing ...
	I0920 18:49:00.787129  762988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 18:49:00.787169  762988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem (1123 bytes)
	I0920 18:49:00.787241  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 18:49:00.787266  762988 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem, removing ...
	I0920 18:49:00.787273  762988 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 18:49:00.787297  762988 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem (1679 bytes)
	I0920 18:49:00.787351  762988 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem org=jenkins.ha-525790-m03 san=[127.0.0.1 192.168.39.105 ha-525790-m03 localhost minikube]
	I0920 18:49:01.027593  762988 provision.go:177] copyRemoteCerts
	I0920 18:49:01.027666  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:49:01.027706  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:49:01.030883  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.031239  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:01.031269  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.031374  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:49:01.031584  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:01.031757  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:49:01.031880  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/id_rsa Username:docker}
	I0920 18:49:01.112943  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 18:49:01.113017  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:49:01.137911  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 18:49:01.138012  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:49:01.162029  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 18:49:01.162099  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 18:49:01.186294  762988 provision.go:87] duration metric: took 406.448312ms to configureAuth
	I0920 18:49:01.186330  762988 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:49:01.186601  762988 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:49:01.186679  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:49:01.189283  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.189565  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:01.189599  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.189778  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:49:01.190004  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:01.190151  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:01.190284  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:49:01.190437  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:49:01.190651  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0920 18:49:01.190666  762988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:49:01.415670  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:49:01.415702  762988 main.go:141] libmachine: Checking connection to Docker...
	I0920 18:49:01.415710  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetURL
	I0920 18:49:01.417024  762988 main.go:141] libmachine: (ha-525790-m03) DBG | Using libvirt version 6000000
	I0920 18:49:01.419032  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.419386  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:01.419434  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.419554  762988 main.go:141] libmachine: Docker is up and running!
	I0920 18:49:01.419580  762988 main.go:141] libmachine: Reticulating splines...
	I0920 18:49:01.419588  762988 client.go:171] duration metric: took 23.264752776s to LocalClient.Create
	I0920 18:49:01.419627  762988 start.go:167] duration metric: took 23.26482906s to libmachine.API.Create "ha-525790"
	I0920 18:49:01.419643  762988 start.go:293] postStartSetup for "ha-525790-m03" (driver="kvm2")
	I0920 18:49:01.419656  762988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:49:01.419679  762988 main.go:141] libmachine: (ha-525790-m03) Calling .DriverName
	I0920 18:49:01.419934  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:49:01.419967  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:49:01.422004  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.422361  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:01.422390  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.422501  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:49:01.422709  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:01.422888  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:49:01.423046  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/id_rsa Username:docker}
	I0920 18:49:01.505266  762988 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:49:01.509857  762988 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:49:01.509888  762988 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/addons for local assets ...
	I0920 18:49:01.509961  762988 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/files for local assets ...
	I0920 18:49:01.510060  762988 filesync.go:149] local asset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> 7484972.pem in /etc/ssl/certs
	I0920 18:49:01.510077  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /etc/ssl/certs/7484972.pem
	I0920 18:49:01.510189  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:49:01.520278  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 18:49:01.544737  762988 start.go:296] duration metric: took 125.077677ms for postStartSetup
	I0920 18:49:01.544786  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetConfigRaw
	I0920 18:49:01.545420  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetIP
	I0920 18:49:01.548112  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.548447  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:01.548464  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.548782  762988 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 18:49:01.549036  762988 start.go:128] duration metric: took 23.413540127s to createHost
	I0920 18:49:01.549067  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:49:01.551495  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.551851  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:01.551881  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.552018  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:49:01.552201  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:01.552360  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:01.552475  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:49:01.552663  762988 main.go:141] libmachine: Using SSH client type: native
	I0920 18:49:01.552890  762988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0920 18:49:01.552905  762988 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:49:01.655748  762988 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726858141.628739337
	
	I0920 18:49:01.655773  762988 fix.go:216] guest clock: 1726858141.628739337
	I0920 18:49:01.655781  762988 fix.go:229] Guest: 2024-09-20 18:49:01.628739337 +0000 UTC Remote: 2024-09-20 18:49:01.549050778 +0000 UTC m=+142.798112058 (delta=79.688559ms)
	I0920 18:49:01.655798  762988 fix.go:200] guest clock delta is within tolerance: 79.688559ms
	I0920 18:49:01.655803  762988 start.go:83] releasing machines lock for "ha-525790-m03", held for 23.520508822s
	I0920 18:49:01.655836  762988 main.go:141] libmachine: (ha-525790-m03) Calling .DriverName
	I0920 18:49:01.656125  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetIP
	I0920 18:49:01.658823  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.659297  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:01.659334  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.661900  762988 out.go:177] * Found network options:
	I0920 18:49:01.663362  762988 out.go:177]   - NO_PROXY=192.168.39.149,192.168.39.246
	W0920 18:49:01.664757  762988 proxy.go:119] fail to check proxy env: Error ip not in block
	W0920 18:49:01.664778  762988 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 18:49:01.664795  762988 main.go:141] libmachine: (ha-525790-m03) Calling .DriverName
	I0920 18:49:01.665398  762988 main.go:141] libmachine: (ha-525790-m03) Calling .DriverName
	I0920 18:49:01.665614  762988 main.go:141] libmachine: (ha-525790-m03) Calling .DriverName
	I0920 18:49:01.665705  762988 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:49:01.665745  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	W0920 18:49:01.665812  762988 proxy.go:119] fail to check proxy env: Error ip not in block
	W0920 18:49:01.665852  762988 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 18:49:01.665930  762988 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:49:01.665957  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:49:01.668602  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.668630  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.669063  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:01.669134  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:01.669160  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.669251  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:01.669405  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:49:01.669623  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:49:01.669648  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:01.669763  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:49:01.669772  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:49:01.669900  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:49:01.669898  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/id_rsa Username:docker}
	I0920 18:49:01.670073  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/id_rsa Username:docker}
	I0920 18:49:01.914294  762988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:49:01.920631  762988 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:49:01.920746  762988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:49:01.939203  762988 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 18:49:01.939233  762988 start.go:495] detecting cgroup driver to use...
	I0920 18:49:01.939298  762988 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:49:01.956879  762988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:49:01.972680  762988 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:49:01.972737  762988 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:49:01.986983  762988 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:49:02.002057  762988 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:49:02.127309  762988 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:49:02.284949  762988 docker.go:233] disabling docker service ...
	I0920 18:49:02.285026  762988 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:49:02.300753  762988 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:49:02.314717  762988 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:49:02.455235  762988 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:49:02.575677  762988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:49:02.589417  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:49:02.609243  762988 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:49:02.609306  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:49:02.619812  762988 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:49:02.619883  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:49:02.630268  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:49:02.640696  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:49:02.651017  762988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:49:02.661779  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:49:02.672169  762988 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:49:02.689257  762988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:49:02.699324  762988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:49:02.708522  762988 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 18:49:02.708581  762988 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 18:49:02.724380  762988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:49:02.735250  762988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:49:02.845773  762988 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:49:02.940137  762988 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:49:02.940234  762988 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:49:02.945137  762988 start.go:563] Will wait 60s for crictl version
	I0920 18:49:02.945195  762988 ssh_runner.go:195] Run: which crictl
	I0920 18:49:02.949025  762988 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:49:02.985466  762988 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:49:02.985563  762988 ssh_runner.go:195] Run: crio --version
	I0920 18:49:03.014070  762988 ssh_runner.go:195] Run: crio --version
	I0920 18:49:03.043847  762988 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:49:03.045096  762988 out.go:177]   - env NO_PROXY=192.168.39.149
	I0920 18:49:03.046434  762988 out.go:177]   - env NO_PROXY=192.168.39.149,192.168.39.246
	I0920 18:49:03.047542  762988 main.go:141] libmachine: (ha-525790-m03) Calling .GetIP
	I0920 18:49:03.050349  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:03.050680  762988 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:49:03.050706  762988 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:49:03.050945  762988 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:49:03.055055  762988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:49:03.067151  762988 mustload.go:65] Loading cluster: ha-525790
	I0920 18:49:03.067360  762988 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:49:03.067653  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:49:03.067702  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:49:03.083141  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43467
	I0920 18:49:03.083620  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:49:03.084155  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:49:03.084195  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:49:03.084513  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:49:03.084805  762988 main.go:141] libmachine: (ha-525790) Calling .GetState
	I0920 18:49:03.086455  762988 host.go:66] Checking if "ha-525790" exists ...
	I0920 18:49:03.086791  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:49:03.086828  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:49:03.102141  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39347
	I0920 18:49:03.102510  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:49:03.103060  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:49:03.103086  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:49:03.103433  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:49:03.103638  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:49:03.103800  762988 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790 for IP: 192.168.39.105
	I0920 18:49:03.103812  762988 certs.go:194] generating shared ca certs ...
	I0920 18:49:03.103827  762988 certs.go:226] acquiring lock for ca certs: {Name:mkf559981e1ff96dd3b092845a7637f34a653668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:49:03.103970  762988 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key
	I0920 18:49:03.104025  762988 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key
	I0920 18:49:03.104040  762988 certs.go:256] generating profile certs ...
	I0920 18:49:03.104161  762988 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.key
	I0920 18:49:03.104187  762988 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.482e4680
	I0920 18:49:03.104203  762988 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.482e4680 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.149 192.168.39.246 192.168.39.105 192.168.39.254]
	I0920 18:49:03.247720  762988 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.482e4680 ...
	I0920 18:49:03.247759  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.482e4680: {Name:mk130da53fe193e08a7298b921e0e7264fd28276 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:49:03.247934  762988 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.482e4680 ...
	I0920 18:49:03.247946  762988 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.482e4680: {Name:mk01fbdfb06a85f266d7928f14dec501e347df1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:49:03.248017  762988 certs.go:381] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.482e4680 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt
	I0920 18:49:03.248149  762988 certs.go:385] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.482e4680 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key
	I0920 18:49:03.248278  762988 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key
	I0920 18:49:03.248294  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 18:49:03.248307  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 18:49:03.248321  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 18:49:03.248333  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 18:49:03.248345  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 18:49:03.248357  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 18:49:03.248369  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 18:49:03.270972  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 18:49:03.271068  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem (1338 bytes)
	W0920 18:49:03.271105  762988 certs.go:480] ignoring /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497_empty.pem, impossibly tiny 0 bytes
	I0920 18:49:03.271116  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:49:03.271137  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:49:03.271158  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:49:03.271180  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem (1679 bytes)
	I0920 18:49:03.271215  762988 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 18:49:03.271243  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /usr/share/ca-certificates/7484972.pem
	I0920 18:49:03.271257  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:49:03.271268  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem -> /usr/share/ca-certificates/748497.pem
	I0920 18:49:03.271305  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:49:03.274365  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:49:03.274796  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:49:03.274826  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:49:03.275040  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:49:03.275257  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:49:03.275432  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:49:03.275609  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:49:03.347244  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0920 18:49:03.352573  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0920 18:49:03.366074  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0920 18:49:03.370940  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0920 18:49:03.383525  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0920 18:49:03.387790  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0920 18:49:03.401524  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0920 18:49:03.406898  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0920 18:49:03.418198  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0920 18:49:03.422213  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0920 18:49:03.432483  762988 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0920 18:49:03.436644  762988 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0920 18:49:03.447720  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:49:03.473142  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:49:03.497800  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:49:03.522032  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:49:03.546357  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0920 18:49:03.569451  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:49:03.592748  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:49:03.618320  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:49:03.643316  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /usr/share/ca-certificates/7484972.pem (1708 bytes)
	I0920 18:49:03.669027  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:49:03.693106  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem --> /usr/share/ca-certificates/748497.pem (1338 bytes)
	I0920 18:49:03.717412  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0920 18:49:03.736210  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0920 18:49:03.752820  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0920 18:49:03.769208  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0920 18:49:03.786468  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0920 18:49:03.803392  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0920 18:49:03.819806  762988 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0920 18:49:03.836525  762988 ssh_runner.go:195] Run: openssl version
	I0920 18:49:03.842244  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7484972.pem && ln -fs /usr/share/ca-certificates/7484972.pem /etc/ssl/certs/7484972.pem"
	I0920 18:49:03.852769  762988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7484972.pem
	I0920 18:49:03.857540  762988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 18:33 /usr/share/ca-certificates/7484972.pem
	I0920 18:49:03.857596  762988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7484972.pem
	I0920 18:49:03.863268  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7484972.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:49:03.873806  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:49:03.884262  762988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:49:03.888603  762988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:49:03.888657  762988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:49:03.894115  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:49:03.904764  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/748497.pem && ln -fs /usr/share/ca-certificates/748497.pem /etc/ssl/certs/748497.pem"
	I0920 18:49:03.915491  762988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/748497.pem
	I0920 18:49:03.920009  762988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 18:33 /usr/share/ca-certificates/748497.pem
	I0920 18:49:03.920061  762988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/748497.pem
	I0920 18:49:03.925625  762988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/748497.pem /etc/ssl/certs/51391683.0"
	I0920 18:49:03.936257  762988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:49:03.940216  762988 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 18:49:03.940272  762988 kubeadm.go:934] updating node {m03 192.168.39.105 8443 v1.31.1 crio true true} ...
	I0920 18:49:03.940372  762988 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-525790-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:49:03.940409  762988 kube-vip.go:115] generating kube-vip config ...
	I0920 18:49:03.940448  762988 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 18:49:03.957917  762988 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 18:49:03.958005  762988 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0920 18:49:03.958067  762988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:49:03.967572  762988 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0920 18:49:03.967624  762988 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0920 18:49:03.976974  762988 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0920 18:49:03.976987  762988 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0920 18:49:03.977005  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 18:49:03.976978  762988 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0920 18:49:03.977048  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 18:49:03.977060  762988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 18:49:03.977022  762988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:49:03.977160  762988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 18:49:03.986571  762988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0920 18:49:03.986605  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0920 18:49:03.986658  762988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0920 18:49:03.986692  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0920 18:49:04.010382  762988 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 18:49:04.010507  762988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 18:49:04.099814  762988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0920 18:49:04.099870  762988 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0920 18:49:04.872454  762988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0920 18:49:04.882387  762988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 18:49:04.899462  762988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:49:04.916731  762988 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 18:49:04.933245  762988 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 18:49:04.937315  762988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:49:04.950503  762988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:49:05.076487  762988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:49:05.092667  762988 host.go:66] Checking if "ha-525790" exists ...
	I0920 18:49:05.093146  762988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:49:05.093208  762988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:49:05.109982  762988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37499
	I0920 18:49:05.110528  762988 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:49:05.111155  762988 main.go:141] libmachine: Using API Version  1
	I0920 18:49:05.111179  762988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:49:05.111484  762988 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:49:05.111774  762988 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:49:05.111942  762988 start.go:317] joinCluster: &{Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:f
alse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:49:05.112135  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0920 18:49:05.112159  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:49:05.115062  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:49:05.115484  762988 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:49:05.115515  762988 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:49:05.115682  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:49:05.115883  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:49:05.116066  762988 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:49:05.116238  762988 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:49:05.305796  762988 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:49:05.305864  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 39ds8x.uncxzpvszbuvr57z --discovery-token-ca-cert-hash sha256:947ef21afc8104efa9fe7e5dbe397ab7540e2665a521761d784eb9c9d11b061d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-525790-m03 --control-plane --apiserver-advertise-address=192.168.39.105 --apiserver-bind-port=8443"
	I0920 18:49:27.719468  762988 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 39ds8x.uncxzpvszbuvr57z --discovery-token-ca-cert-hash sha256:947ef21afc8104efa9fe7e5dbe397ab7540e2665a521761d784eb9c9d11b061d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-525790-m03 --control-plane --apiserver-advertise-address=192.168.39.105 --apiserver-bind-port=8443": (22.413569312s)
	I0920 18:49:27.719513  762988 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0920 18:49:28.224417  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-525790-m03 minikube.k8s.io/updated_at=2024_09_20T18_49_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a minikube.k8s.io/name=ha-525790 minikube.k8s.io/primary=false
	I0920 18:49:28.363168  762988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-525790-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0920 18:49:28.483620  762988 start.go:319] duration metric: took 23.371650439s to joinCluster
	I0920 18:49:28.484099  762988 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:49:28.484156  762988 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:49:28.485758  762988 out.go:177] * Verifying Kubernetes components...
	I0920 18:49:28.487390  762988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:49:28.832062  762988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:49:28.888819  762988 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:49:28.889070  762988 kapi.go:59] client config for ha-525790: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.crt", KeyFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.key", CAFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0920 18:49:28.889131  762988 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.149:8443
	I0920 18:49:28.889340  762988 node_ready.go:35] waiting up to 6m0s for node "ha-525790-m03" to be "Ready" ...
	I0920 18:49:28.889437  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:28.889450  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:28.889462  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:28.889469  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:28.893312  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:29.389975  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:29.390001  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:29.390011  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:29.390015  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:29.393538  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:29.890123  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:29.890149  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:29.890162  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:29.890171  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:29.894353  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:49:30.390136  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:30.390164  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:30.390176  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:30.390181  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:30.393957  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:30.890420  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:30.890442  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:30.890458  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:30.890462  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:30.895075  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:49:30.895862  762988 node_ready.go:53] node "ha-525790-m03" has status "Ready":"False"
	I0920 18:49:31.389871  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:31.389893  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:31.389902  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:31.389907  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:31.393271  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:31.890390  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:31.890411  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:31.890419  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:31.890423  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:31.894048  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:32.389848  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:32.389870  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:32.389879  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:32.389884  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:32.393339  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:32.890299  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:32.890328  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:32.890338  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:32.890343  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:32.893810  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:33.390110  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:33.390140  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:33.390152  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:33.390157  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:33.393525  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:33.393988  762988 node_ready.go:53] node "ha-525790-m03" has status "Ready":"False"
	I0920 18:49:33.890279  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:33.890305  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:33.890317  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:33.890326  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:33.894103  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:34.389629  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:34.389653  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:34.389661  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:34.389666  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:34.393423  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:34.889832  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:34.889861  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:34.889872  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:34.889878  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:34.894113  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:49:35.389632  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:35.389653  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:35.389661  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:35.389668  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:35.392384  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:35.890106  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:35.890141  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:35.890153  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:35.890158  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:35.893183  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:35.893799  762988 node_ready.go:53] node "ha-525790-m03" has status "Ready":"False"
	I0920 18:49:36.390240  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:36.390262  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:36.390275  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:36.390280  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:36.394094  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:36.890179  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:36.890202  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:36.890211  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:36.890216  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:36.893745  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:37.389770  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:37.389795  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:37.389804  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:37.389810  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:37.393011  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:37.889970  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:37.889992  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:37.890000  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:37.890006  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:37.893447  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:37.893999  762988 node_ready.go:53] node "ha-525790-m03" has status "Ready":"False"
	I0920 18:49:38.389862  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:38.389886  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:38.389894  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:38.389898  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:38.393578  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:38.889977  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:38.890002  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:38.890015  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:38.890023  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:38.894709  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:49:39.389961  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:39.389985  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:39.389994  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:39.389997  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:39.393445  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:39.889607  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:39.889639  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:39.889646  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:39.889650  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:39.893375  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:39.894029  762988 node_ready.go:53] node "ha-525790-m03" has status "Ready":"False"
	I0920 18:49:40.389658  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:40.389687  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:40.389699  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:40.389716  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:40.393116  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:40.890100  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:40.890123  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:40.890130  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:40.890135  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:40.893347  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:41.389584  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:41.389611  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:41.389626  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:41.389630  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:41.393223  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:41.890328  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:41.890352  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:41.890361  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:41.890366  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:41.894247  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:41.894758  762988 node_ready.go:53] node "ha-525790-m03" has status "Ready":"False"
	I0920 18:49:42.390094  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:42.390118  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:42.390125  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:42.390129  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:42.393818  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:42.890390  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:42.890413  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:42.890421  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:42.890426  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:42.893913  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:43.390304  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:43.390325  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.390334  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.390338  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.393629  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:43.394194  762988 node_ready.go:49] node "ha-525790-m03" has status "Ready":"True"
	I0920 18:49:43.394215  762988 node_ready.go:38] duration metric: took 14.504859113s for node "ha-525790-m03" to be "Ready" ...
	I0920 18:49:43.394227  762988 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:49:43.394317  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods
	I0920 18:49:43.394332  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.394342  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.394349  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.399934  762988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 18:49:43.406601  762988 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nfnkj" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.406680  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nfnkj
	I0920 18:49:43.406688  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.406695  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.406698  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.409686  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:43.410357  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:43.410375  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.410382  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.410387  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.413203  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:43.414003  762988 pod_ready.go:93] pod "coredns-7c65d6cfc9-nfnkj" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:43.414026  762988 pod_ready.go:82] duration metric: took 7.399649ms for pod "coredns-7c65d6cfc9-nfnkj" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.414037  762988 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rpcds" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.414110  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rpcds
	I0920 18:49:43.414120  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.414132  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.414139  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.416709  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:43.417387  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:43.417403  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.417411  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.417414  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.419923  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:43.420442  762988 pod_ready.go:93] pod "coredns-7c65d6cfc9-rpcds" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:43.420459  762988 pod_ready.go:82] duration metric: took 6.41605ms for pod "coredns-7c65d6cfc9-rpcds" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.420467  762988 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.420515  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/etcd-ha-525790
	I0920 18:49:43.420523  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.420529  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.420533  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.422830  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:43.423442  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:43.423459  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.423470  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.423476  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.425740  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:43.426292  762988 pod_ready.go:93] pod "etcd-ha-525790" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:43.426309  762988 pod_ready.go:82] duration metric: took 5.837018ms for pod "etcd-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.426318  762988 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.426372  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/etcd-ha-525790-m02
	I0920 18:49:43.426378  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.426385  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.426392  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.428740  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:43.429271  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:49:43.429289  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.429295  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.429301  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.431315  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:43.431859  762988 pod_ready.go:93] pod "etcd-ha-525790-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:43.431880  762988 pod_ready.go:82] duration metric: took 5.554102ms for pod "etcd-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.431888  762988 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-525790-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.591305  762988 request.go:632] Waited for 159.354613ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/etcd-ha-525790-m03
	I0920 18:49:43.591397  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/etcd-ha-525790-m03
	I0920 18:49:43.591408  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.591418  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.591426  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.594816  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:43.790451  762988 request.go:632] Waited for 194.957771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:43.790546  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:43.790557  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.790567  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.790572  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.793782  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:43.794516  762988 pod_ready.go:93] pod "etcd-ha-525790-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:43.794545  762988 pod_ready.go:82] duration metric: took 362.651207ms for pod "etcd-ha-525790-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.794561  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:43.990932  762988 request.go:632] Waited for 196.293385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790
	I0920 18:49:43.991032  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790
	I0920 18:49:43.991044  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:43.991055  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:43.991070  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:43.994301  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:44.191298  762988 request.go:632] Waited for 196.219991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:44.191370  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:44.191378  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:44.191385  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:44.191391  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:44.195180  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:44.195974  762988 pod_ready.go:93] pod "kube-apiserver-ha-525790" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:44.195997  762988 pod_ready.go:82] duration metric: took 401.428334ms for pod "kube-apiserver-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:44.196011  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:44.390919  762988 request.go:632] Waited for 194.788684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790-m02
	I0920 18:49:44.390990  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790-m02
	I0920 18:49:44.390995  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:44.391003  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:44.391008  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:44.394492  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:44.591289  762988 request.go:632] Waited for 196.078558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:49:44.591352  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:49:44.591358  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:44.591365  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:44.591370  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:44.595290  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:44.596291  762988 pod_ready.go:93] pod "kube-apiserver-ha-525790-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:44.596314  762988 pod_ready.go:82] duration metric: took 400.296135ms for pod "kube-apiserver-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:44.596325  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-525790-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:44.790722  762988 request.go:632] Waited for 194.31856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790-m03
	I0920 18:49:44.790804  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790-m03
	I0920 18:49:44.790810  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:44.790818  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:44.790822  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:44.794357  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:44.990524  762988 request.go:632] Waited for 195.282104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:44.990631  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:44.990644  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:44.990655  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:44.990665  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:44.994191  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:44.994903  762988 pod_ready.go:93] pod "kube-apiserver-ha-525790-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:44.994929  762988 pod_ready.go:82] duration metric: took 398.597843ms for pod "kube-apiserver-ha-525790-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:44.994944  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:45.191368  762988 request.go:632] Waited for 196.335448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-525790
	I0920 18:49:45.191459  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-525790
	I0920 18:49:45.191467  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:45.191475  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:45.191483  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:45.195161  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:45.391240  762988 request.go:632] Waited for 195.352512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:45.391325  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:45.391333  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:45.391341  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:45.391346  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:45.396237  762988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 18:49:45.397053  762988 pod_ready.go:93] pod "kube-controller-manager-ha-525790" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:45.397069  762988 pod_ready.go:82] duration metric: took 402.117627ms for pod "kube-controller-manager-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:45.397080  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:45.590744  762988 request.go:632] Waited for 193.581272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-525790-m02
	I0920 18:49:45.590855  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-525790-m02
	I0920 18:49:45.590865  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:45.590877  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:45.590883  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:45.594359  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:45.791023  762988 request.go:632] Waited for 195.208519ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:49:45.791108  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:49:45.791116  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:45.791126  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:45.791131  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:45.794779  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:45.795437  762988 pod_ready.go:93] pod "kube-controller-manager-ha-525790-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:45.795459  762988 pod_ready.go:82] duration metric: took 398.37091ms for pod "kube-controller-manager-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:45.795469  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-525790-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:45.990550  762988 request.go:632] Waited for 195.001281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-525790-m03
	I0920 18:49:45.990624  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-525790-m03
	I0920 18:49:45.990630  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:45.990638  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:45.990643  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:45.994052  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:46.191122  762988 request.go:632] Waited for 196.353155ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:46.191247  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:46.191259  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:46.191268  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:46.191274  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:46.194216  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:46.194981  762988 pod_ready.go:93] pod "kube-controller-manager-ha-525790-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:46.195002  762988 pod_ready.go:82] duration metric: took 399.526934ms for pod "kube-controller-manager-ha-525790-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:46.195013  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-958jz" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:46.390922  762988 request.go:632] Waited for 195.832956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-958jz
	I0920 18:49:46.391009  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-958jz
	I0920 18:49:46.391020  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:46.391029  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:46.391035  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:46.394008  762988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 18:49:46.591177  762988 request.go:632] Waited for 196.363553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:46.591252  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:46.591257  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:46.591267  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:46.591274  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:46.594463  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:46.595077  762988 pod_ready.go:93] pod "kube-proxy-958jz" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:46.595099  762988 pod_ready.go:82] duration metric: took 400.079203ms for pod "kube-proxy-958jz" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:46.595109  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dx9pg" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:46.791219  762988 request.go:632] Waited for 195.994883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dx9pg
	I0920 18:49:46.791280  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dx9pg
	I0920 18:49:46.791285  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:46.791294  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:46.791299  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:46.794750  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:46.990905  762988 request.go:632] Waited for 195.399247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:46.990977  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:46.990982  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:46.990990  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:46.990998  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:46.994578  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:46.995251  762988 pod_ready.go:93] pod "kube-proxy-dx9pg" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:46.995275  762988 pod_ready.go:82] duration metric: took 400.160371ms for pod "kube-proxy-dx9pg" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:46.995288  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sspfs" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:47.191109  762988 request.go:632] Waited for 195.732991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sspfs
	I0920 18:49:47.191198  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sspfs
	I0920 18:49:47.191209  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:47.191220  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:47.191229  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:47.194285  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:47.390397  762988 request.go:632] Waited for 195.278961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:49:47.390485  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:49:47.390494  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:47.390502  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:47.390509  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:47.394123  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:47.394634  762988 pod_ready.go:93] pod "kube-proxy-sspfs" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:47.394658  762988 pod_ready.go:82] duration metric: took 399.362351ms for pod "kube-proxy-sspfs" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:47.394668  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:47.590688  762988 request.go:632] Waited for 195.932452ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-525790
	I0920 18:49:47.590750  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-525790
	I0920 18:49:47.590756  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:47.590766  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:47.590773  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:47.594088  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:47.791044  762988 request.go:632] Waited for 196.393517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:47.791127  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790
	I0920 18:49:47.791137  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:47.791151  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:47.791160  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:47.794795  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:47.795601  762988 pod_ready.go:93] pod "kube-scheduler-ha-525790" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:47.795620  762988 pod_ready.go:82] duration metric: took 400.94539ms for pod "kube-scheduler-ha-525790" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:47.795629  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:47.990769  762988 request.go:632] Waited for 195.033171ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-525790-m02
	I0920 18:49:47.990860  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-525790-m02
	I0920 18:49:47.990871  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:47.990883  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:47.990894  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:47.994202  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:48.191063  762988 request.go:632] Waited for 196.257455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:49:48.191127  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m02
	I0920 18:49:48.191134  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:48.191144  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:48.191149  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:48.194376  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:48.194886  762988 pod_ready.go:93] pod "kube-scheduler-ha-525790-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:48.194906  762988 pod_ready.go:82] duration metric: took 399.270985ms for pod "kube-scheduler-ha-525790-m02" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:48.194915  762988 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-525790-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:48.390935  762988 request.go:632] Waited for 195.938247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-525790-m03
	I0920 18:49:48.391011  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-525790-m03
	I0920 18:49:48.391029  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:48.391064  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:48.391074  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:48.394097  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:48.591276  762988 request.go:632] Waited for 196.398543ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:48.591340  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes/ha-525790-m03
	I0920 18:49:48.591351  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:48.591359  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:48.591363  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:48.594456  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:48.595126  762988 pod_ready.go:93] pod "kube-scheduler-ha-525790-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 18:49:48.595147  762988 pod_ready.go:82] duration metric: took 400.225521ms for pod "kube-scheduler-ha-525790-m03" in "kube-system" namespace to be "Ready" ...
	I0920 18:49:48.595159  762988 pod_ready.go:39] duration metric: took 5.200916863s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:49:48.595173  762988 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:49:48.595224  762988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:49:48.611081  762988 api_server.go:72] duration metric: took 20.126887425s to wait for apiserver process to appear ...
	I0920 18:49:48.611105  762988 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:49:48.611130  762988 api_server.go:253] Checking apiserver healthz at https://192.168.39.149:8443/healthz ...
	I0920 18:49:48.616371  762988 api_server.go:279] https://192.168.39.149:8443/healthz returned 200:
	ok
	I0920 18:49:48.616442  762988 round_trippers.go:463] GET https://192.168.39.149:8443/version
	I0920 18:49:48.616450  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:48.616461  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:48.616470  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:48.617373  762988 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0920 18:49:48.617437  762988 api_server.go:141] control plane version: v1.31.1
	I0920 18:49:48.617451  762988 api_server.go:131] duration metric: took 6.339029ms to wait for apiserver health ...
	I0920 18:49:48.617458  762988 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:49:48.790943  762988 request.go:632] Waited for 173.409092ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods
	I0920 18:49:48.791019  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods
	I0920 18:49:48.791024  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:48.791031  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:48.791035  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:48.799193  762988 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0920 18:49:48.807423  762988 system_pods.go:59] 24 kube-system pods found
	I0920 18:49:48.807457  762988 system_pods.go:61] "coredns-7c65d6cfc9-nfnkj" [7994989d-6bfa-4d25-b7b7-662d2e6c742c] Running
	I0920 18:49:48.807464  762988 system_pods.go:61] "coredns-7c65d6cfc9-rpcds" [7db58219-7147-4a45-b233-ef3c698566ef] Running
	I0920 18:49:48.807470  762988 system_pods.go:61] "etcd-ha-525790" [f23cd40e-ac8d-451b-9bf9-2ef5d62ef4b6] Running
	I0920 18:49:48.807476  762988 system_pods.go:61] "etcd-ha-525790-m02" [5a29103e-6da3-40d1-be3c-58fdc0f28b54] Running
	I0920 18:49:48.807480  762988 system_pods.go:61] "etcd-ha-525790-m03" [33df920f-e346-4613-af3b-67042a9db421] Running
	I0920 18:49:48.807485  762988 system_pods.go:61] "kindnet-8glgp" [f462782e-1ff6-410a-8359-de3360d380b0] Running
	I0920 18:49:48.807489  762988 system_pods.go:61] "kindnet-9qbm6" [87e8ae18-a561-48ec-9835-27446b6917d3] Running
	I0920 18:49:48.807493  762988 system_pods.go:61] "kindnet-j5mmq" [9ecd60f9-bfbf-4292-8449-869dd3afa02c] Running
	I0920 18:49:48.807498  762988 system_pods.go:61] "kube-apiserver-ha-525790" [0e3563fd-5185-4dc6-8d9b-a7d954b96c8d] Running
	I0920 18:49:48.807503  762988 system_pods.go:61] "kube-apiserver-ha-525790-m02" [b3966e2e-ce3d-4916-b73c-0d80cd1793f0] Running
	I0920 18:49:48.807508  762988 system_pods.go:61] "kube-apiserver-ha-525790-m03" [7649543a-3c54-4627-8a0a-bc1945712ad7] Running
	I0920 18:49:48.807514  762988 system_pods.go:61] "kube-controller-manager-ha-525790" [1d695853-6a7e-487d-a52b-9aceb1fc9ff3] Running
	I0920 18:49:48.807519  762988 system_pods.go:61] "kube-controller-manager-ha-525790-m02" [090c1833-3800-4e13-b9a7-c03680f3d55d] Running
	I0920 18:49:48.807524  762988 system_pods.go:61] "kube-controller-manager-ha-525790-m03" [5e675da3-2dd4-417a-a6f8-d4fe90da0ac0] Running
	I0920 18:49:48.807529  762988 system_pods.go:61] "kube-proxy-958jz" [46603403-eb82-4f15-a1da-da62194a072f] Running
	I0920 18:49:48.807535  762988 system_pods.go:61] "kube-proxy-dx9pg" [aa873f4e-a8f0-49ab-95e9-d81d15b650f5] Running
	I0920 18:49:48.807543  762988 system_pods.go:61] "kube-proxy-sspfs" [15203515-fc45-4624-b97e-8ec247f01e2d] Running
	I0920 18:49:48.807550  762988 system_pods.go:61] "kube-scheduler-ha-525790" [8cb7e23e-c1d1-4753-9758-b17ef9fd08d7] Running
	I0920 18:49:48.807556  762988 system_pods.go:61] "kube-scheduler-ha-525790-m02" [dc9a5561-5d41-445d-a0ba-de3b2405f821] Running
	I0920 18:49:48.807562  762988 system_pods.go:61] "kube-scheduler-ha-525790-m03" [729fa556-4301-49a9-8ed0-506ecb3a8b76] Running
	I0920 18:49:48.807567  762988 system_pods.go:61] "kube-vip-ha-525790" [0b318b1e-7a85-4c8c-8a5a-2fee226d7702] Running
	I0920 18:49:48.807576  762988 system_pods.go:61] "kube-vip-ha-525790-m02" [f2316231-5c1d-4bf2-ae62-5a4202b5818b] Running
	I0920 18:49:48.807581  762988 system_pods.go:61] "kube-vip-ha-525790-m03" [3050094c-de2a-449f-866c-0e8ddceb697d] Running
	I0920 18:49:48.807587  762988 system_pods.go:61] "storage-provisioner" [ea6bf34f-c1f7-4216-a61f-be30846c991b] Running
	I0920 18:49:48.807599  762988 system_pods.go:74] duration metric: took 190.132126ms to wait for pod list to return data ...
	I0920 18:49:48.807613  762988 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:49:48.991230  762988 request.go:632] Waited for 183.520385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/default/serviceaccounts
	I0920 18:49:48.991298  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/default/serviceaccounts
	I0920 18:49:48.991305  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:48.991315  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:48.991320  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:48.994457  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:48.994600  762988 default_sa.go:45] found service account: "default"
	I0920 18:49:48.994616  762988 default_sa.go:55] duration metric: took 186.997115ms for default service account to be created ...
	I0920 18:49:48.994626  762988 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:49:49.191090  762988 request.go:632] Waited for 196.382893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods
	I0920 18:49:49.191150  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/namespaces/kube-system/pods
	I0920 18:49:49.191156  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:49.191167  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:49.191172  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:49.196609  762988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 18:49:49.203953  762988 system_pods.go:86] 24 kube-system pods found
	I0920 18:49:49.203984  762988 system_pods.go:89] "coredns-7c65d6cfc9-nfnkj" [7994989d-6bfa-4d25-b7b7-662d2e6c742c] Running
	I0920 18:49:49.203991  762988 system_pods.go:89] "coredns-7c65d6cfc9-rpcds" [7db58219-7147-4a45-b233-ef3c698566ef] Running
	I0920 18:49:49.203997  762988 system_pods.go:89] "etcd-ha-525790" [f23cd40e-ac8d-451b-9bf9-2ef5d62ef4b6] Running
	I0920 18:49:49.204001  762988 system_pods.go:89] "etcd-ha-525790-m02" [5a29103e-6da3-40d1-be3c-58fdc0f28b54] Running
	I0920 18:49:49.204005  762988 system_pods.go:89] "etcd-ha-525790-m03" [33df920f-e346-4613-af3b-67042a9db421] Running
	I0920 18:49:49.204010  762988 system_pods.go:89] "kindnet-8glgp" [f462782e-1ff6-410a-8359-de3360d380b0] Running
	I0920 18:49:49.204015  762988 system_pods.go:89] "kindnet-9qbm6" [87e8ae18-a561-48ec-9835-27446b6917d3] Running
	I0920 18:49:49.204020  762988 system_pods.go:89] "kindnet-j5mmq" [9ecd60f9-bfbf-4292-8449-869dd3afa02c] Running
	I0920 18:49:49.204026  762988 system_pods.go:89] "kube-apiserver-ha-525790" [0e3563fd-5185-4dc6-8d9b-a7d954b96c8d] Running
	I0920 18:49:49.204033  762988 system_pods.go:89] "kube-apiserver-ha-525790-m02" [b3966e2e-ce3d-4916-b73c-0d80cd1793f0] Running
	I0920 18:49:49.204042  762988 system_pods.go:89] "kube-apiserver-ha-525790-m03" [7649543a-3c54-4627-8a0a-bc1945712ad7] Running
	I0920 18:49:49.204048  762988 system_pods.go:89] "kube-controller-manager-ha-525790" [1d695853-6a7e-487d-a52b-9aceb1fc9ff3] Running
	I0920 18:49:49.204061  762988 system_pods.go:89] "kube-controller-manager-ha-525790-m02" [090c1833-3800-4e13-b9a7-c03680f3d55d] Running
	I0920 18:49:49.204067  762988 system_pods.go:89] "kube-controller-manager-ha-525790-m03" [5e675da3-2dd4-417a-a6f8-d4fe90da0ac0] Running
	I0920 18:49:49.204073  762988 system_pods.go:89] "kube-proxy-958jz" [46603403-eb82-4f15-a1da-da62194a072f] Running
	I0920 18:49:49.204081  762988 system_pods.go:89] "kube-proxy-dx9pg" [aa873f4e-a8f0-49ab-95e9-d81d15b650f5] Running
	I0920 18:49:49.204086  762988 system_pods.go:89] "kube-proxy-sspfs" [15203515-fc45-4624-b97e-8ec247f01e2d] Running
	I0920 18:49:49.204093  762988 system_pods.go:89] "kube-scheduler-ha-525790" [8cb7e23e-c1d1-4753-9758-b17ef9fd08d7] Running
	I0920 18:49:49.204097  762988 system_pods.go:89] "kube-scheduler-ha-525790-m02" [dc9a5561-5d41-445d-a0ba-de3b2405f821] Running
	I0920 18:49:49.204103  762988 system_pods.go:89] "kube-scheduler-ha-525790-m03" [729fa556-4301-49a9-8ed0-506ecb3a8b76] Running
	I0920 18:49:49.204107  762988 system_pods.go:89] "kube-vip-ha-525790" [0b318b1e-7a85-4c8c-8a5a-2fee226d7702] Running
	I0920 18:49:49.204115  762988 system_pods.go:89] "kube-vip-ha-525790-m02" [f2316231-5c1d-4bf2-ae62-5a4202b5818b] Running
	I0920 18:49:49.204121  762988 system_pods.go:89] "kube-vip-ha-525790-m03" [3050094c-de2a-449f-866c-0e8ddceb697d] Running
	I0920 18:49:49.204127  762988 system_pods.go:89] "storage-provisioner" [ea6bf34f-c1f7-4216-a61f-be30846c991b] Running
	I0920 18:49:49.204137  762988 system_pods.go:126] duration metric: took 209.50314ms to wait for k8s-apps to be running ...
	I0920 18:49:49.204149  762988 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:49:49.204205  762988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:49:49.220678  762988 system_svc.go:56] duration metric: took 16.519226ms WaitForService to wait for kubelet
	I0920 18:49:49.220713  762988 kubeadm.go:582] duration metric: took 20.736522024s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:49:49.220737  762988 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:49:49.391073  762988 request.go:632] Waited for 170.223638ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.149:8443/api/v1/nodes
	I0920 18:49:49.391144  762988 round_trippers.go:463] GET https://192.168.39.149:8443/api/v1/nodes
	I0920 18:49:49.391152  762988 round_trippers.go:469] Request Headers:
	I0920 18:49:49.391163  762988 round_trippers.go:473]     Accept: application/json, */*
	I0920 18:49:49.391185  762988 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 18:49:49.395131  762988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 18:49:49.396058  762988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:49:49.396082  762988 node_conditions.go:123] node cpu capacity is 2
	I0920 18:49:49.396097  762988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:49:49.396102  762988 node_conditions.go:123] node cpu capacity is 2
	I0920 18:49:49.396107  762988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 18:49:49.396112  762988 node_conditions.go:123] node cpu capacity is 2
	I0920 18:49:49.396118  762988 node_conditions.go:105] duration metric: took 175.374616ms to run NodePressure ...
	I0920 18:49:49.396133  762988 start.go:241] waiting for startup goroutines ...
	I0920 18:49:49.396165  762988 start.go:255] writing updated cluster config ...
	I0920 18:49:49.396463  762988 ssh_runner.go:195] Run: rm -f paused
	I0920 18:49:49.451056  762988 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:49:49.453054  762988 out.go:177] * Done! kubectl is now configured to use "ha-525790" cluster and "default" namespace by default
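The polling visible above (repeated GETs against /api/v1/nodes/... and /api/v1/namespaces/kube-system/pods/... until the Ready condition reports True) is ordinary client-go usage. A minimal stand-alone sketch of that pattern, assuming client-go and a kubeconfig at the default location, might look like the following; it is illustrative only, not minikube's actual implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig location is an assumption; minikube writes one per profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll roughly every 500ms, matching the spacing of the GETs in the log,
	// until the node reports Ready or the 6-minute budget runs out.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-525790-m03", metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("node ha-525790-m03 is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to become Ready")
}

The recurring "Waited for ... due to client-side throttling, not priority and fairness" entries above come from client-go's client-side rate limiter (typically ~5 QPS with a small burst), which spaces requests roughly 200ms apart once the burst is exhausted; they do not indicate server-side API Priority and Fairness queuing.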
	
	
	==> CRI-O <==
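The entries below are CRI-O's debug log of incoming CRI (Container Runtime Interface) calls such as Version, ImageFsInfo and ListContainers, issued by the kubelet and by tools such as crictl during log collection. For reference, a minimal sketch of issuing the same RPCs directly over the runtime socket with the cri-api Go client follows; the socket path is CRI-O's default and the sketch is an illustrative assumption, not part of the test output:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// CRI-O's default socket path; run this on the node (e.g. via "minikube ssh").
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithBlock())
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Same Version RPC that appears in the debug log below.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("runtime %s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// Unfiltered ListContainers, equivalent to the "No filters were applied" case logged below.
	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range list.Containers {
		fmt.Println(c.Id, c.Metadata.Name, c.State)
	}
}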
	Sep 20 18:53:37 ha-525790 crio[657]: time="2024-09-20 18:53:37.635592223Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858417635569361,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=33362e7c-a034-404a-918a-1e0e2fb0a925 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:53:37 ha-525790 crio[657]: time="2024-09-20 18:53:37.636214457Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7769192d-ce79-4ce1-a543-46fcce53bdf0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:37 ha-525790 crio[657]: time="2024-09-20 18:53:37.636313288Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7769192d-ce79-4ce1-a543-46fcce53bdf0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:37 ha-525790 crio[657]: time="2024-09-20 18:53:37.636652625Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:344b03b51dddb76a43e34eff505cae1c7f2fc0c407fd4b1907c7b90ca3f1740d,PodSandboxId:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726858192106122080,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57fdde7a007ff9a10cfbb40f67eb3fd2036aeb4918ebe808fdb7ab94429b6f90,PodSandboxId:f2f3faeb3feb37731a72146ab0e2730c2f00b0a64c288e6aa139840b8d1852b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726858057039915142,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e,PodSandboxId:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858056980739363,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1,PodSandboxId:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858056983536331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6b
fa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98,PodSandboxId:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268580
44669106744,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8,PodSandboxId:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858044313140306,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c704a3be19bcb0cfb653cb3bdad4548ff16ab59fc886290b6b1ed57874b379cc,PodSandboxId:afc309e0288a67308501f446405f65d8615c4060f819039947aff5f12a4b1be9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726858035446566658,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede9a5fdac3bc6f58bd35cff44d56d88,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1196adfd1199669f289106197057a6a027e2a97c97830961d5c102c7143d67bb,PodSandboxId:4ed8fcb6c51972392f851f91d41ef974ee35c8b05f66d02ba0fbacb37d072738,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858033110408572,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706,PodSandboxId:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858033123239626,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93,PodSandboxId:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858033076459540,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49582cb9e07244a19c1edf1accdab94a1702d3cccc9e120b67b8c49f7629db72,PodSandboxId:ee2f4d881a4246f1bf78be961d0510d0f0774b7bcb9c2febc0c3568a63704973,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858033054127944,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7769192d-ce79-4ce1-a543-46fcce53bdf0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:37 ha-525790 crio[657]: time="2024-09-20 18:53:37.678384497Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=21c42895-25d6-47a2-b803-c63f652be591 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:53:37 ha-525790 crio[657]: time="2024-09-20 18:53:37.678465796Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=21c42895-25d6-47a2-b803-c63f652be591 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:53:37 ha-525790 crio[657]: time="2024-09-20 18:53:37.679738802Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a8b06bc0-3b67-4965-83bc-689aa3fdfae1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:53:37 ha-525790 crio[657]: time="2024-09-20 18:53:37.680189935Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858417680165550,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a8b06bc0-3b67-4965-83bc-689aa3fdfae1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:53:37 ha-525790 crio[657]: time="2024-09-20 18:53:37.680989736Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=95bd8533-399c-4f23-a662-83f66f1e8d66 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:37 ha-525790 crio[657]: time="2024-09-20 18:53:37.681043702Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95bd8533-399c-4f23-a662-83f66f1e8d66 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:37 ha-525790 crio[657]: time="2024-09-20 18:53:37.681373756Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:344b03b51dddb76a43e34eff505cae1c7f2fc0c407fd4b1907c7b90ca3f1740d,PodSandboxId:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726858192106122080,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57fdde7a007ff9a10cfbb40f67eb3fd2036aeb4918ebe808fdb7ab94429b6f90,PodSandboxId:f2f3faeb3feb37731a72146ab0e2730c2f00b0a64c288e6aa139840b8d1852b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726858057039915142,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e,PodSandboxId:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858056980739363,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1,PodSandboxId:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858056983536331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6b
fa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98,PodSandboxId:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268580
44669106744,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8,PodSandboxId:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858044313140306,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c704a3be19bcb0cfb653cb3bdad4548ff16ab59fc886290b6b1ed57874b379cc,PodSandboxId:afc309e0288a67308501f446405f65d8615c4060f819039947aff5f12a4b1be9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726858035446566658,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede9a5fdac3bc6f58bd35cff44d56d88,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1196adfd1199669f289106197057a6a027e2a97c97830961d5c102c7143d67bb,PodSandboxId:4ed8fcb6c51972392f851f91d41ef974ee35c8b05f66d02ba0fbacb37d072738,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858033110408572,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706,PodSandboxId:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858033123239626,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93,PodSandboxId:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858033076459540,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49582cb9e07244a19c1edf1accdab94a1702d3cccc9e120b67b8c49f7629db72,PodSandboxId:ee2f4d881a4246f1bf78be961d0510d0f0774b7bcb9c2febc0c3568a63704973,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858033054127944,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=95bd8533-399c-4f23-a662-83f66f1e8d66 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:37 ha-525790 crio[657]: time="2024-09-20 18:53:37.719707084Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c89a05f6-28e0-4cb9-b768-03e7b963e152 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:53:37 ha-525790 crio[657]: time="2024-09-20 18:53:37.719803150Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c89a05f6-28e0-4cb9-b768-03e7b963e152 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:53:37 ha-525790 crio[657]: time="2024-09-20 18:53:37.724572049Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c70148d0-4594-4c54-b50c-36927e5932c1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:53:37 ha-525790 crio[657]: time="2024-09-20 18:53:37.725024879Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858417724999727,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c70148d0-4594-4c54-b50c-36927e5932c1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:53:37 ha-525790 crio[657]: time="2024-09-20 18:53:37.726040709Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e593d760-9394-4f2a-b8d8-fdb52a1be627 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:37 ha-525790 crio[657]: time="2024-09-20 18:53:37.726095583Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e593d760-9394-4f2a-b8d8-fdb52a1be627 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:37 ha-525790 crio[657]: time="2024-09-20 18:53:37.726407579Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:344b03b51dddb76a43e34eff505cae1c7f2fc0c407fd4b1907c7b90ca3f1740d,PodSandboxId:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726858192106122080,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57fdde7a007ff9a10cfbb40f67eb3fd2036aeb4918ebe808fdb7ab94429b6f90,PodSandboxId:f2f3faeb3feb37731a72146ab0e2730c2f00b0a64c288e6aa139840b8d1852b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726858057039915142,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e,PodSandboxId:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858056980739363,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1,PodSandboxId:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858056983536331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6b
fa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98,PodSandboxId:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268580
44669106744,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8,PodSandboxId:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858044313140306,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c704a3be19bcb0cfb653cb3bdad4548ff16ab59fc886290b6b1ed57874b379cc,PodSandboxId:afc309e0288a67308501f446405f65d8615c4060f819039947aff5f12a4b1be9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726858035446566658,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede9a5fdac3bc6f58bd35cff44d56d88,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1196adfd1199669f289106197057a6a027e2a97c97830961d5c102c7143d67bb,PodSandboxId:4ed8fcb6c51972392f851f91d41ef974ee35c8b05f66d02ba0fbacb37d072738,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858033110408572,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706,PodSandboxId:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858033123239626,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93,PodSandboxId:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858033076459540,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49582cb9e07244a19c1edf1accdab94a1702d3cccc9e120b67b8c49f7629db72,PodSandboxId:ee2f4d881a4246f1bf78be961d0510d0f0774b7bcb9c2febc0c3568a63704973,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858033054127944,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e593d760-9394-4f2a-b8d8-fdb52a1be627 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:37 ha-525790 crio[657]: time="2024-09-20 18:53:37.764872234Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d328a6d9-84b8-49ed-b829-77bf2e903bd3 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:53:37 ha-525790 crio[657]: time="2024-09-20 18:53:37.764965429Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d328a6d9-84b8-49ed-b829-77bf2e903bd3 name=/runtime.v1.RuntimeService/Version
	Sep 20 18:53:37 ha-525790 crio[657]: time="2024-09-20 18:53:37.766246372Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=56d1e53e-ac43-41e4-8d1a-95bfdeabe3e1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:53:37 ha-525790 crio[657]: time="2024-09-20 18:53:37.766727318Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858417766702451,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=56d1e53e-ac43-41e4-8d1a-95bfdeabe3e1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 18:53:37 ha-525790 crio[657]: time="2024-09-20 18:53:37.767406892Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d32b394e-f0ac-459c-b9b2-4fb6a819ff90 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:37 ha-525790 crio[657]: time="2024-09-20 18:53:37.767472584Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d32b394e-f0ac-459c-b9b2-4fb6a819ff90 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 18:53:37 ha-525790 crio[657]: time="2024-09-20 18:53:37.767787797Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:344b03b51dddb76a43e34eff505cae1c7f2fc0c407fd4b1907c7b90ca3f1740d,PodSandboxId:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726858192106122080,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57fdde7a007ff9a10cfbb40f67eb3fd2036aeb4918ebe808fdb7ab94429b6f90,PodSandboxId:f2f3faeb3feb37731a72146ab0e2730c2f00b0a64c288e6aa139840b8d1852b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726858057039915142,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e,PodSandboxId:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858056980739363,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1,PodSandboxId:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858056983536331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6b
fa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98,PodSandboxId:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268580
44669106744,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8,PodSandboxId:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858044313140306,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c704a3be19bcb0cfb653cb3bdad4548ff16ab59fc886290b6b1ed57874b379cc,PodSandboxId:afc309e0288a67308501f446405f65d8615c4060f819039947aff5f12a4b1be9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726858035446566658,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede9a5fdac3bc6f58bd35cff44d56d88,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1196adfd1199669f289106197057a6a027e2a97c97830961d5c102c7143d67bb,PodSandboxId:4ed8fcb6c51972392f851f91d41ef974ee35c8b05f66d02ba0fbacb37d072738,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858033110408572,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706,PodSandboxId:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858033123239626,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93,PodSandboxId:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858033076459540,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49582cb9e07244a19c1edf1accdab94a1702d3cccc9e120b67b8c49f7629db72,PodSandboxId:ee2f4d881a4246f1bf78be961d0510d0f0774b7bcb9c2febc0c3568a63704973,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858033054127944,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d32b394e-f0ac-459c-b9b2-4fb6a819ff90 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	344b03b51dddb       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   125671e39b996       busybox-7dff88458-z26jr
	57fdde7a007ff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   f2f3faeb3feb3       storage-provisioner
	172e8f75d2a84       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   5dbd6acffd5c5       coredns-7c65d6cfc9-nfnkj
	3dff404b6ad2a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   34517f9f64c86       coredns-7c65d6cfc9-rpcds
	5579930bef0fc       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   64136f65f6d34       kindnet-9qbm6
	3d469134674c2       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   2e440a5ac73b7       kube-proxy-958jz
	c704a3be19bcb       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   afc309e0288a6       kube-vip-ha-525790
	7d0496391eb85       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   fae09dfcf3d6f       kube-scheduler-ha-525790
	1196adfd11996       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   4ed8fcb6c5197       kube-apiserver-ha-525790
	bcca29b119984       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   17818940c2036       etcd-ha-525790
	49582cb9e0724       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   ee2f4d881a424       kube-controller-manager-ha-525790
	
	
	==> coredns [172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1] <==
	[INFO] 10.244.0.4:52678 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000127756s
	[INFO] 10.244.1.2:49868 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000196016s
	[INFO] 10.244.1.2:54874 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00387198s
	[INFO] 10.244.1.2:39870 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000203758s
	[INFO] 10.244.1.2:47679 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000185456s
	[INFO] 10.244.1.2:49534 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000164113s
	[INFO] 10.244.2.2:50032 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167479s
	[INFO] 10.244.2.2:33413 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001865571s
	[INFO] 10.244.0.4:38374 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010475s
	[INFO] 10.244.0.4:44676 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170058s
	[INFO] 10.244.0.4:54182 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000123082s
	[INFO] 10.244.0.4:52067 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108075s
	[INFO] 10.244.1.2:36885 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000133944s
	[INFO] 10.244.2.2:48327 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127372s
	[INFO] 10.244.2.2:52262 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160755s
	[INFO] 10.244.0.4:44171 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111758s
	[INFO] 10.244.1.2:36220 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000196033s
	[INFO] 10.244.1.2:33859 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000222322s
	[INFO] 10.244.1.2:55349 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000158431s
	[INFO] 10.244.2.2:37976 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138385s
	[INFO] 10.244.2.2:56378 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000191303s
	[INFO] 10.244.2.2:54246 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117607s
	[INFO] 10.244.0.4:53115 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116565s
	[INFO] 10.244.0.4:49608 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000095821s
	[INFO] 10.244.0.4:60862 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111997s
	
	
	==> coredns [3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e] <==
	[INFO] 10.244.0.4:45127 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001808433s
	[INFO] 10.244.1.2:43604 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003790448s
	[INFO] 10.244.1.2:40634 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000273503s
	[INFO] 10.244.1.2:53633 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000177331s
	[INFO] 10.244.2.2:45376 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000253726s
	[INFO] 10.244.2.2:42750 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000311517s
	[INFO] 10.244.2.2:42748 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001319529s
	[INFO] 10.244.2.2:49203 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000190348s
	[INFO] 10.244.2.2:44849 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00019366s
	[INFO] 10.244.2.2:52186 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103082s
	[INFO] 10.244.0.4:58300 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140735s
	[INFO] 10.244.0.4:59752 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001702673s
	[INFO] 10.244.0.4:33721 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001170599s
	[INFO] 10.244.0.4:42180 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061647s
	[INFO] 10.244.1.2:49177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000333372s
	[INFO] 10.244.1.2:57192 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147894s
	[INFO] 10.244.1.2:59125 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095482s
	[INFO] 10.244.2.2:50879 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019818s
	[INFO] 10.244.2.2:47467 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096359s
	[INFO] 10.244.0.4:54464 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087148s
	[INFO] 10.244.0.4:40326 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011895s
	[INFO] 10.244.0.4:46142 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071583s
	[INFO] 10.244.1.2:50168 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000224622s
	[INFO] 10.244.2.2:50611 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000117577s
	[INFO] 10.244.0.4:57391 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000320119s
	
	
	==> describe nodes <==
	Name:               ha-525790
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-525790
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=ha-525790
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_47_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:47:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-525790
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:53:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:50:23 +0000   Fri, 20 Sep 2024 18:47:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:50:23 +0000   Fri, 20 Sep 2024 18:47:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:50:23 +0000   Fri, 20 Sep 2024 18:47:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:50:23 +0000   Fri, 20 Sep 2024 18:47:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.149
	  Hostname:    ha-525790
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d3f2b96a8819496a94e034cf4adf7a85
	  System UUID:                d3f2b96a-8819-496a-94e0-34cf4adf7a85
	  Boot ID:                    02f79ecd-567f-4683-83ce-59afb46feab6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-z26jr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 coredns-7c65d6cfc9-nfnkj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m14s
	  kube-system                 coredns-7c65d6cfc9-rpcds             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m14s
	  kube-system                 etcd-ha-525790                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m19s
	  kube-system                 kindnet-9qbm6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m15s
	  kube-system                 kube-apiserver-ha-525790             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-controller-manager-ha-525790    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-proxy-958jz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-scheduler-ha-525790             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-vip-ha-525790                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m13s  kube-proxy       
	  Normal  Starting                 6m19s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m19s  kubelet          Node ha-525790 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m19s  kubelet          Node ha-525790 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m19s  kubelet          Node ha-525790 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m15s  node-controller  Node ha-525790 event: Registered Node ha-525790 in Controller
	  Normal  NodeReady                6m2s   kubelet          Node ha-525790 status is now: NodeReady
	  Normal  RegisteredNode           5m16s  node-controller  Node ha-525790 event: Registered Node ha-525790 in Controller
	  Normal  RegisteredNode           4m5s   node-controller  Node ha-525790 event: Registered Node ha-525790 in Controller
	
	
	Name:               ha-525790-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-525790-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=ha-525790
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T18_48_16_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:48:13 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-525790-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:50:56 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 20 Sep 2024 18:50:16 +0000   Fri, 20 Sep 2024 18:51:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 20 Sep 2024 18:50:16 +0000   Fri, 20 Sep 2024 18:51:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 20 Sep 2024 18:50:16 +0000   Fri, 20 Sep 2024 18:51:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 20 Sep 2024 18:50:16 +0000   Fri, 20 Sep 2024 18:51:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.246
	  Hostname:    ha-525790-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1dbde4511fc24bbcb1281f7b7d6ff24f
	  System UUID:                1dbde451-1fc2-4bbc-b128-1f7b7d6ff24f
	  Boot ID:                    9ec76d35-ca9a-483c-b479-9d99ec8feedc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7jtss                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 etcd-ha-525790-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m23s
	  kube-system                 kindnet-8glgp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m25s
	  kube-system                 kube-apiserver-ha-525790-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-controller-manager-ha-525790-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-proxy-sspfs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-scheduler-ha-525790-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-vip-ha-525790-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m20s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m25s (x8 over 5m25s)  kubelet          Node ha-525790-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m25s (x8 over 5m25s)  kubelet          Node ha-525790-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m25s (x7 over 5m25s)  kubelet          Node ha-525790-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m20s                  node-controller  Node ha-525790-m02 event: Registered Node ha-525790-m02 in Controller
	  Normal  RegisteredNode           5m16s                  node-controller  Node ha-525790-m02 event: Registered Node ha-525790-m02 in Controller
	  Normal  RegisteredNode           4m5s                   node-controller  Node ha-525790-m02 event: Registered Node ha-525790-m02 in Controller
	  Normal  NodeNotReady             2m                     node-controller  Node ha-525790-m02 status is now: NodeNotReady
	
	
	Name:               ha-525790-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-525790-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=ha-525790
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T18_49_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:49:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-525790-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:53:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:49:54 +0000   Fri, 20 Sep 2024 18:49:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:49:54 +0000   Fri, 20 Sep 2024 18:49:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:49:54 +0000   Fri, 20 Sep 2024 18:49:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:49:54 +0000   Fri, 20 Sep 2024 18:49:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.105
	  Hostname:    ha-525790-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 007556c5fa674bcd927152e3b0cca9b2
	  System UUID:                007556c5-fa67-4bcd-9271-52e3b0cca9b2
	  Boot ID:                    2d4db773-7cb0-4bef-b28d-d6863649acb9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-jmx4g                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 etcd-ha-525790-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m12s
	  kube-system                 kindnet-j5mmq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m14s
	  kube-system                 kube-apiserver-ha-525790-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-controller-manager-ha-525790-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-proxy-dx9pg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-scheduler-ha-525790-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-vip-ha-525790-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m9s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  4m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m14s (x8 over 4m15s)  kubelet          Node ha-525790-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m14s (x8 over 4m15s)  kubelet          Node ha-525790-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m14s (x7 over 4m15s)  kubelet          Node ha-525790-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-525790-m03 event: Registered Node ha-525790-m03 in Controller
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-525790-m03 event: Registered Node ha-525790-m03 in Controller
	  Normal  RegisteredNode           4m5s                   node-controller  Node ha-525790-m03 event: Registered Node ha-525790-m03 in Controller
	
	
	Name:               ha-525790-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-525790-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=ha-525790
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T18_50_26_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:50:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-525790-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:53:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:50:57 +0000   Fri, 20 Sep 2024 18:50:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:50:57 +0000   Fri, 20 Sep 2024 18:50:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:50:57 +0000   Fri, 20 Sep 2024 18:50:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:50:57 +0000   Fri, 20 Sep 2024 18:50:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.181
	  Hostname:    ha-525790-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c58d814e5e5d49b699d9f977eb54ff58
	  System UUID:                c58d814e-5e5d-49b6-99d9-f977eb54ff58
	  Boot ID:                    69924ac5-b6f2-4ddd-bd0d-fa3c683681d6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-df8hf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m12s
	  kube-system                 kube-proxy-w98cx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m12s (x2 over 3m13s)  kubelet          Node ha-525790-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m12s (x2 over 3m13s)  kubelet          Node ha-525790-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m12s (x2 over 3m13s)  kubelet          Node ha-525790-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-525790-m04 event: Registered Node ha-525790-m04 in Controller
	  Normal  RegisteredNode           3m10s                  node-controller  Node ha-525790-m04 event: Registered Node ha-525790-m04 in Controller
	  Normal  RegisteredNode           3m10s                  node-controller  Node ha-525790-m04 event: Registered Node ha-525790-m04 in Controller
	  Normal  NodeReady                2m54s                  kubelet          Node ha-525790-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep20 18:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049615] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041335] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.781215] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.493789] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.593513] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep20 18:47] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.053987] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058272] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.180542] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.143015] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.280287] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +3.923962] systemd-fstab-generator[743]: Ignoring "noauto" option for root device
	[  +3.905808] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.064972] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.290695] systemd-fstab-generator[1298]: Ignoring "noauto" option for root device
	[  +0.091789] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.472602] kauditd_printk_skb: 36 callbacks suppressed
	[ +11.974718] kauditd_printk_skb: 23 callbacks suppressed
	[Sep20 18:48] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93] <==
	{"level":"warn","ts":"2024-09-20T18:53:38.063171Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:38.070661Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:38.075487Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:38.076546Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:38.087019Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:38.093358Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:38.100557Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:38.104659Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:38.105595Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:38.109707Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:38.110413Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:38.116434Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:38.122830Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:38.128448Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:38.133197Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:38.136733Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:38.142504Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:38.151481Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:38.157473Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:38.161378Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:38.164116Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:38.168414Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:38.173931Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:38.175757Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T18:53:38.179903Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ba3e3e863cacc4d","from":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:53:38 up 6 min,  0 users,  load average: 0.69, 0.29, 0.13
	Linux ha-525790 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98] <==
	I0920 18:53:05.885842       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	I0920 18:53:15.886307       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 18:53:15.886406       1 main.go:299] handling current node
	I0920 18:53:15.886461       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:53:15.886488       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 18:53:15.886631       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0920 18:53:15.886653       1 main.go:322] Node ha-525790-m03 has CIDR [10.244.2.0/24] 
	I0920 18:53:15.886712       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 18:53:15.886731       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	I0920 18:53:25.880388       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 18:53:25.880418       1 main.go:299] handling current node
	I0920 18:53:25.880431       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:53:25.880437       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 18:53:25.880623       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0920 18:53:25.880629       1 main.go:322] Node ha-525790-m03 has CIDR [10.244.2.0/24] 
	I0920 18:53:25.880667       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 18:53:25.880672       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	I0920 18:53:35.886405       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 18:53:35.886622       1 main.go:299] handling current node
	I0920 18:53:35.886681       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:53:35.886701       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 18:53:35.886897       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0920 18:53:35.886917       1 main.go:322] Node ha-525790-m03 has CIDR [10.244.2.0/24] 
	I0920 18:53:35.886971       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 18:53:35.886989       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [1196adfd1199669f289106197057a6a027e2a97c97830961d5c102c7143d67bb] <==
	W0920 18:47:18.009766       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.149]
	I0920 18:47:18.010784       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 18:47:18.015641       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0920 18:47:18.249854       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0920 18:47:19.683867       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 18:47:19.709897       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0920 18:47:19.867045       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 18:47:23.355786       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0920 18:47:23.802179       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0920 18:49:53.563053       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46192: use of closed network connection
	E0920 18:49:53.772052       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46208: use of closed network connection
	E0920 18:49:53.971905       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46230: use of closed network connection
	E0920 18:49:54.183484       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46258: use of closed network connection
	E0920 18:49:54.358996       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46282: use of closed network connection
	E0920 18:49:54.568631       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46306: use of closed network connection
	E0920 18:49:54.751815       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46320: use of closed network connection
	E0920 18:49:54.931094       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46346: use of closed network connection
	E0920 18:49:55.134164       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46362: use of closed network connection
	E0920 18:49:55.422343       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46396: use of closed network connection
	E0920 18:49:55.606742       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46420: use of closed network connection
	E0920 18:49:55.788879       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46442: use of closed network connection
	E0920 18:49:55.968453       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46450: use of closed network connection
	E0920 18:49:56.152146       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46460: use of closed network connection
	E0920 18:49:56.335452       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46464: use of closed network connection
	W0920 18:51:07.982250       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.105 192.168.39.149]
	
	
	==> kube-controller-manager [49582cb9e07244a19c1edf1accdab94a1702d3cccc9e120b67b8c49f7629db72] <==
	I0920 18:50:26.211532       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-525790-m04" podCIDRs=["10.244.3.0/24"]
	I0920 18:50:26.211587       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:26.211616       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:26.225025       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:26.521754       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:26.959047       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:27.339450       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:28.189228       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:28.189762       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-525790-m04"
	I0920 18:50:28.268460       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:28.721421       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:28.749109       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:36.536189       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:44.973968       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:44.974514       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-525790-m04"
	I0920 18:50:44.992518       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:47.269588       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:50:57.000828       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:51:38.216141       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-525790-m04"
	I0920 18:51:38.216594       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m02"
	I0920 18:51:38.240377       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m02"
	I0920 18:51:38.269433       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.448755ms"
	I0920 18:51:38.269538       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="52.42µs"
	I0920 18:51:38.804819       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m02"
	I0920 18:51:43.466404       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m02"
	
	
	==> kube-proxy [3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:47:24.817372       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 18:47:24.843820       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.149"]
	E0920 18:47:24.843948       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:47:24.955225       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:47:24.955317       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:47:24.955347       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:47:24.958548       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:47:24.959874       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:47:24.959905       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:47:24.962813       1 config.go:199] "Starting service config controller"
	I0920 18:47:24.965782       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:47:24.965817       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:47:24.968165       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:47:24.968295       1 config.go:328] "Starting node config controller"
	I0920 18:47:24.968302       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:47:25.067459       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:47:25.068474       1 shared_informer.go:320] Caches are synced for node config
	I0920 18:47:25.068496       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706] <==
	E0920 18:49:50.397182       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-jmx4g\": pod busybox-7dff88458-jmx4g is already assigned to node \"ha-525790-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-jmx4g" node="ha-525790-m03"
	E0920 18:49:50.397248       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 223d79ec-368f-47a1-aa7b-26d153195e57(default/busybox-7dff88458-jmx4g) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-jmx4g"
	E0920 18:49:50.397330       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-jmx4g\": pod busybox-7dff88458-jmx4g is already assigned to node \"ha-525790-m03\"" pod="default/busybox-7dff88458-jmx4g"
	I0920 18:49:50.397369       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-jmx4g" node="ha-525790-m03"
	E0920 18:49:50.409140       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-z26jr\": pod busybox-7dff88458-z26jr is already assigned to node \"ha-525790\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-z26jr" node="ha-525790"
	E0920 18:49:50.409195       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3a3cda3d-ccab-4483-98e6-50d779cc3354(default/busybox-7dff88458-z26jr) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-z26jr"
	E0920 18:49:50.409213       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-z26jr\": pod busybox-7dff88458-z26jr is already assigned to node \"ha-525790\"" pod="default/busybox-7dff88458-z26jr"
	I0920 18:49:50.409243       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-z26jr" node="ha-525790"
	E0920 18:49:50.532066       1 schedule_one.go:1078] "Error occurred" err="Pod default/busybox-7dff88458-pt85x is already present in the active queue" pod="default/busybox-7dff88458-pt85x"
	E0920 18:50:26.262797       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-fz5b4\": pod kindnet-fz5b4 is already assigned to node \"ha-525790-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-fz5b4" node="ha-525790-m04"
	E0920 18:50:26.262881       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e8309f8d-3b06-4e9f-9bad-e0745dd2b30c(kube-system/kindnet-fz5b4) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-fz5b4"
	E0920 18:50:26.262903       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-fz5b4\": pod kindnet-fz5b4 is already assigned to node \"ha-525790-m04\"" pod="kube-system/kindnet-fz5b4"
	I0920 18:50:26.262924       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-fz5b4" node="ha-525790-m04"
	E0920 18:50:26.263223       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-w98cx\": pod kube-proxy-w98cx is already assigned to node \"ha-525790-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-w98cx" node="ha-525790-m04"
	E0920 18:50:26.263412       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod cd3e68cf-e7ed-47fc-ae4b-c701394a8c1f(kube-system/kube-proxy-w98cx) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-w98cx"
	E0920 18:50:26.263548       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-w98cx\": pod kube-proxy-w98cx is already assigned to node \"ha-525790-m04\"" pod="kube-system/kube-proxy-w98cx"
	I0920 18:50:26.263699       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-w98cx" node="ha-525790-m04"
	E0920 18:50:26.297985       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-hwgsh\": pod kindnet-hwgsh is already assigned to node \"ha-525790-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-hwgsh" node="ha-525790-m04"
	E0920 18:50:26.298064       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9ff40332-cdad-4e9f-99ca-28d1271713a8(kube-system/kindnet-hwgsh) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-hwgsh"
	E0920 18:50:26.298079       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-hwgsh\": pod kindnet-hwgsh is already assigned to node \"ha-525790-m04\"" pod="kube-system/kindnet-hwgsh"
	I0920 18:50:26.298095       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-hwgsh" node="ha-525790-m04"
	E0920 18:50:26.298461       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-rh89s\": pod kube-proxy-rh89s is already assigned to node \"ha-525790-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-rh89s" node="ha-525790-m04"
	E0920 18:50:26.298512       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 340d5abf-2e79-4cc0-8f1f-130c1e176259(kube-system/kube-proxy-rh89s) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-rh89s"
	E0920 18:50:26.298529       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-rh89s\": pod kube-proxy-rh89s is already assigned to node \"ha-525790-m04\"" pod="kube-system/kube-proxy-rh89s"
	I0920 18:50:26.298548       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-rh89s" node="ha-525790-m04"
	
	
	==> kubelet <==
	Sep 20 18:52:19 ha-525790 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 18:52:19 ha-525790 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 18:52:19 ha-525790 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 18:52:19 ha-525790 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 18:52:19 ha-525790 kubelet[1305]: E0920 18:52:19.763018    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858339762333652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:52:19 ha-525790 kubelet[1305]: E0920 18:52:19.763043    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858339762333652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:52:29 ha-525790 kubelet[1305]: E0920 18:52:29.765356    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858349764077984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:52:29 ha-525790 kubelet[1305]: E0920 18:52:29.765949    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858349764077984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:52:39 ha-525790 kubelet[1305]: E0920 18:52:39.767540    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858359766941786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:52:39 ha-525790 kubelet[1305]: E0920 18:52:39.767585    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858359766941786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:52:49 ha-525790 kubelet[1305]: E0920 18:52:49.770639    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858369770138616,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:52:49 ha-525790 kubelet[1305]: E0920 18:52:49.770662    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858369770138616,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:52:59 ha-525790 kubelet[1305]: E0920 18:52:59.772133    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858379771879043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:52:59 ha-525790 kubelet[1305]: E0920 18:52:59.772178    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858379771879043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:53:09 ha-525790 kubelet[1305]: E0920 18:53:09.773633    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858389773387666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:53:09 ha-525790 kubelet[1305]: E0920 18:53:09.773655    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858389773387666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:53:19 ha-525790 kubelet[1305]: E0920 18:53:19.642578    1305 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 18:53:19 ha-525790 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 18:53:19 ha-525790 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 18:53:19 ha-525790 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 18:53:19 ha-525790 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 18:53:19 ha-525790 kubelet[1305]: E0920 18:53:19.776518    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858399775795306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:53:19 ha-525790 kubelet[1305]: E0920 18:53:19.776559    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858399775795306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:53:29 ha-525790 kubelet[1305]: E0920 18:53:29.780049    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858409779796104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:53:29 ha-525790 kubelet[1305]: E0920 18:53:29.780095    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726858409779796104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-525790 -n ha-525790
helpers_test.go:261: (dbg) Run:  kubectl --context ha-525790 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.44s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (799.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-525790 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-525790 -v=7 --alsologtostderr
E0920 18:54:08.037170  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:54:55.904624  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-525790 -v=7 --alsologtostderr: exit status 82 (2m1.868146745s)

                                                
                                                
-- stdout --
	* Stopping node "ha-525790-m04"  ...
	* Stopping node "ha-525790-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:53:43.402244  768107 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:53:43.402525  768107 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:53:43.402538  768107 out.go:358] Setting ErrFile to fd 2...
	I0920 18:53:43.402542  768107 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:53:43.402750  768107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
	I0920 18:53:43.403047  768107 out.go:352] Setting JSON to false
	I0920 18:53:43.403159  768107 mustload.go:65] Loading cluster: ha-525790
	I0920 18:53:43.403558  768107 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:53:43.403647  768107 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 18:53:43.403848  768107 mustload.go:65] Loading cluster: ha-525790
	I0920 18:53:43.403997  768107 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:53:43.404036  768107 stop.go:39] StopHost: ha-525790-m04
	I0920 18:53:43.404525  768107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:53:43.404569  768107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:53:43.419957  768107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36719
	I0920 18:53:43.420459  768107 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:53:43.421206  768107 main.go:141] libmachine: Using API Version  1
	I0920 18:53:43.421232  768107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:53:43.421679  768107 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:53:43.424311  768107 out.go:177] * Stopping node "ha-525790-m04"  ...
	I0920 18:53:43.425572  768107 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0920 18:53:43.425598  768107 main.go:141] libmachine: (ha-525790-m04) Calling .DriverName
	I0920 18:53:43.425837  768107 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0920 18:53:43.425859  768107 main.go:141] libmachine: (ha-525790-m04) Calling .GetSSHHostname
	I0920 18:53:43.428450  768107 main.go:141] libmachine: (ha-525790-m04) DBG | domain ha-525790-m04 has defined MAC address 52:54:00:ba:c9:a8 in network mk-ha-525790
	I0920 18:53:43.428896  768107 main.go:141] libmachine: (ha-525790-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:c9:a8", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:50:11 +0000 UTC Type:0 Mac:52:54:00:ba:c9:a8 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:ha-525790-m04 Clientid:01:52:54:00:ba:c9:a8}
	I0920 18:53:43.428917  768107 main.go:141] libmachine: (ha-525790-m04) DBG | domain ha-525790-m04 has defined IP address 192.168.39.181 and MAC address 52:54:00:ba:c9:a8 in network mk-ha-525790
	I0920 18:53:43.429074  768107 main.go:141] libmachine: (ha-525790-m04) Calling .GetSSHPort
	I0920 18:53:43.429256  768107 main.go:141] libmachine: (ha-525790-m04) Calling .GetSSHKeyPath
	I0920 18:53:43.429392  768107 main.go:141] libmachine: (ha-525790-m04) Calling .GetSSHUsername
	I0920 18:53:43.429513  768107 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m04/id_rsa Username:docker}
	I0920 18:53:43.519539  768107 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0920 18:53:43.572762  768107 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0920 18:53:43.626343  768107 main.go:141] libmachine: Stopping "ha-525790-m04"...
	I0920 18:53:43.626376  768107 main.go:141] libmachine: (ha-525790-m04) Calling .GetState
	I0920 18:53:43.627986  768107 main.go:141] libmachine: (ha-525790-m04) Calling .Stop
	I0920 18:53:43.631610  768107 main.go:141] libmachine: (ha-525790-m04) Waiting for machine to stop 0/120
	I0920 18:53:44.806923  768107 main.go:141] libmachine: (ha-525790-m04) Calling .GetState
	I0920 18:53:44.808173  768107 main.go:141] libmachine: Machine "ha-525790-m04" was stopped.
	I0920 18:53:44.808204  768107 stop.go:75] duration metric: took 1.38263058s to stop
	I0920 18:53:44.808231  768107 stop.go:39] StopHost: ha-525790-m03
	I0920 18:53:44.808713  768107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:53:44.808790  768107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:53:44.826045  768107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43261
	I0920 18:53:44.826624  768107 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:53:44.827186  768107 main.go:141] libmachine: Using API Version  1
	I0920 18:53:44.827296  768107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:53:44.827673  768107 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:53:44.829766  768107 out.go:177] * Stopping node "ha-525790-m03"  ...
	I0920 18:53:44.831006  768107 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0920 18:53:44.831031  768107 main.go:141] libmachine: (ha-525790-m03) Calling .DriverName
	I0920 18:53:44.831279  768107 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0920 18:53:44.831308  768107 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHHostname
	I0920 18:53:44.833946  768107 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:53:44.834383  768107 main.go:141] libmachine: (ha-525790-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:21:86", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:48:53 +0000 UTC Type:0 Mac:52:54:00:c8:21:86 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-525790-m03 Clientid:01:52:54:00:c8:21:86}
	I0920 18:53:44.834411  768107 main.go:141] libmachine: (ha-525790-m03) DBG | domain ha-525790-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:c8:21:86 in network mk-ha-525790
	I0920 18:53:44.834578  768107 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHPort
	I0920 18:53:44.834749  768107 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHKeyPath
	I0920 18:53:44.834936  768107 main.go:141] libmachine: (ha-525790-m03) Calling .GetSSHUsername
	I0920 18:53:44.835079  768107 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m03/id_rsa Username:docker}
	I0920 18:53:44.923185  768107 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0920 18:53:44.976514  768107 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0920 18:53:45.030835  768107 main.go:141] libmachine: Stopping "ha-525790-m03"...
	I0920 18:53:45.030884  768107 main.go:141] libmachine: (ha-525790-m03) Calling .GetState
	I0920 18:53:45.032523  768107 main.go:141] libmachine: (ha-525790-m03) Calling .Stop
	I0920 18:53:45.036303  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 0/120
	I0920 18:53:46.037885  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 1/120
	I0920 18:53:47.039522  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 2/120
	I0920 18:53:48.041207  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 3/120
	I0920 18:53:49.043005  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 4/120
	I0920 18:53:50.044997  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 5/120
	I0920 18:53:51.046238  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 6/120
	I0920 18:53:52.047674  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 7/120
	I0920 18:53:53.049690  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 8/120
	I0920 18:53:54.051188  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 9/120
	I0920 18:53:55.053172  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 10/120
	I0920 18:53:56.054788  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 11/120
	I0920 18:53:57.056372  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 12/120
	I0920 18:53:58.058005  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 13/120
	I0920 18:53:59.059361  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 14/120
	I0920 18:54:00.061125  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 15/120
	I0920 18:54:01.062632  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 16/120
	I0920 18:54:02.063956  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 17/120
	I0920 18:54:03.065518  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 18/120
	I0920 18:54:04.066814  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 19/120
	I0920 18:54:05.068745  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 20/120
	I0920 18:54:06.070350  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 21/120
	I0920 18:54:07.071830  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 22/120
	I0920 18:54:08.073502  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 23/120
	I0920 18:54:09.075152  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 24/120
	I0920 18:54:10.077488  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 25/120
	I0920 18:54:11.079469  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 26/120
	I0920 18:54:12.080948  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 27/120
	I0920 18:54:13.082591  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 28/120
	I0920 18:54:14.084022  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 29/120
	I0920 18:54:15.085868  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 30/120
	I0920 18:54:16.087317  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 31/120
	I0920 18:54:17.088876  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 32/120
	I0920 18:54:18.090223  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 33/120
	I0920 18:54:19.091589  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 34/120
	I0920 18:54:20.093361  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 35/120
	I0920 18:54:21.094714  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 36/120
	I0920 18:54:22.096040  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 37/120
	I0920 18:54:23.097397  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 38/120
	I0920 18:54:24.098794  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 39/120
	I0920 18:54:25.100520  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 40/120
	I0920 18:54:26.101650  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 41/120
	I0920 18:54:27.103092  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 42/120
	I0920 18:54:28.104285  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 43/120
	I0920 18:54:29.105677  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 44/120
	I0920 18:54:30.108063  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 45/120
	I0920 18:54:31.109224  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 46/120
	I0920 18:54:32.110601  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 47/120
	I0920 18:54:33.111692  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 48/120
	I0920 18:54:34.113106  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 49/120
	I0920 18:54:35.114780  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 50/120
	I0920 18:54:36.115920  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 51/120
	I0920 18:54:37.117317  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 52/120
	I0920 18:54:38.118503  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 53/120
	I0920 18:54:39.119752  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 54/120
	I0920 18:54:40.121350  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 55/120
	I0920 18:54:41.122671  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 56/120
	I0920 18:54:42.124047  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 57/120
	I0920 18:54:43.125390  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 58/120
	I0920 18:54:44.127662  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 59/120
	I0920 18:54:45.129558  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 60/120
	I0920 18:54:46.131291  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 61/120
	I0920 18:54:47.132681  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 62/120
	I0920 18:54:48.134129  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 63/120
	I0920 18:54:49.135345  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 64/120
	I0920 18:54:50.136935  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 65/120
	I0920 18:54:51.138267  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 66/120
	I0920 18:54:52.139508  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 67/120
	I0920 18:54:53.141710  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 68/120
	I0920 18:54:54.143277  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 69/120
	I0920 18:54:55.144529  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 70/120
	I0920 18:54:56.145960  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 71/120
	I0920 18:54:57.147366  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 72/120
	I0920 18:54:58.148707  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 73/120
	I0920 18:54:59.149947  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 74/120
	I0920 18:55:00.152219  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 75/120
	I0920 18:55:01.153486  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 76/120
	I0920 18:55:02.154927  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 77/120
	I0920 18:55:03.156053  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 78/120
	I0920 18:55:04.157768  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 79/120
	I0920 18:55:05.159559  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 80/120
	I0920 18:55:06.161472  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 81/120
	I0920 18:55:07.162881  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 82/120
	I0920 18:55:08.164125  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 83/120
	I0920 18:55:09.165525  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 84/120
	I0920 18:55:10.167168  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 85/120
	I0920 18:55:11.168580  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 86/120
	I0920 18:55:12.170004  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 87/120
	I0920 18:55:13.171466  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 88/120
	I0920 18:55:14.172755  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 89/120
	I0920 18:55:15.174585  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 90/120
	I0920 18:55:16.175992  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 91/120
	I0920 18:55:17.177410  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 92/120
	I0920 18:55:18.178791  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 93/120
	I0920 18:55:19.180190  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 94/120
	I0920 18:55:20.181948  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 95/120
	I0920 18:55:21.183505  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 96/120
	I0920 18:55:22.184788  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 97/120
	I0920 18:55:23.186266  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 98/120
	I0920 18:55:24.187963  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 99/120
	I0920 18:55:25.189442  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 100/120
	I0920 18:55:26.190752  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 101/120
	I0920 18:55:27.192267  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 102/120
	I0920 18:55:28.193568  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 103/120
	I0920 18:55:29.195162  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 104/120
	I0920 18:55:30.197087  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 105/120
	I0920 18:55:31.198509  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 106/120
	I0920 18:55:32.199982  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 107/120
	I0920 18:55:33.201471  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 108/120
	I0920 18:55:34.202898  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 109/120
	I0920 18:55:35.204665  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 110/120
	I0920 18:55:36.206041  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 111/120
	I0920 18:55:37.207570  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 112/120
	I0920 18:55:38.208862  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 113/120
	I0920 18:55:39.210262  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 114/120
	I0920 18:55:40.212174  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 115/120
	I0920 18:55:41.213696  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 116/120
	I0920 18:55:42.215029  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 117/120
	I0920 18:55:43.216631  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 118/120
	I0920 18:55:44.217971  768107 main.go:141] libmachine: (ha-525790-m03) Waiting for machine to stop 119/120
	I0920 18:55:45.218565  768107 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0920 18:55:45.218633  768107 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0920 18:55:45.220748  768107 out.go:201] 
	W0920 18:55:45.222078  768107 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0920 18:55:45.222093  768107 out.go:270] * 
	* 
	W0920 18:55:45.225833  768107 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 18:55:45.227098  768107 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-525790 -v=7 --alsologtostderr" : exit status 82
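ha_test.go:464 context: the stop above exited with status 82 after minikube reported GUEST_STOP_TIMEOUT; the driver polled "ha-525790-m03" for all 120 attempts without the VM ever leaving the "Running" state. A minimal manual recovery sketch (not part of the test harness; it assumes the default libvirt system connection and takes the domain name and profile from the log above) would be:
	# show the libvirt domains created for this profile and their current state
	virsh --connect qemu:///system list --all
	# hard power-off the node that ignored the graceful stop request
	virsh --connect qemu:///system destroy ha-525790-m03
	# re-run the stop so minikube reconciles its profile state
	out/minikube-linux-amd64 stop -p ha-525790 -v=7 --alsologtostderr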
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-525790 --wait=true -v=7 --alsologtostderr
E0920 18:56:24.183907  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:56:51.882145  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:59:55.905036  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:01:24.180173  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:55.904676  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:06:24.179758  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-525790 --wait=true -v=7 --alsologtostderr: exit status 80 (11m14.616703885s)

                                                
                                                
-- stdout --
	* [ha-525790] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "ha-525790" primary control-plane node in "ha-525790" cluster
	* Updating the running kvm2 "ha-525790" VM ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	* Enabled addons: 
	
	* Starting "ha-525790-m02" control-plane node in "ha-525790" cluster
	* Restarting existing kvm2 VM for "ha-525790-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.39.149
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.149
	* Verifying Kubernetes components...
	
	* Starting "ha-525790-m03" control-plane node in "ha-525790" cluster
	* Restarting existing kvm2 VM for "ha-525790-m03" ...
	* Found network options:
	  - NO_PROXY=192.168.39.149,192.168.39.246
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.149
	  - env NO_PROXY=192.168.39.149,192.168.39.246
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:55:45.275296  768595 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:55:45.275412  768595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:55:45.275421  768595 out.go:358] Setting ErrFile to fd 2...
	I0920 18:55:45.275425  768595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:55:45.275635  768595 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
	I0920 18:55:45.276210  768595 out.go:352] Setting JSON to false
	I0920 18:55:45.277141  768595 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9495,"bootTime":1726849050,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:55:45.277240  768595 start.go:139] virtualization: kvm guest
	I0920 18:55:45.279445  768595 out.go:177] * [ha-525790] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:55:45.280764  768595 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 18:55:45.280835  768595 notify.go:220] Checking for updates...
	I0920 18:55:45.283366  768595 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:55:45.284696  768595 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:55:45.285940  768595 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:55:45.287169  768595 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:55:45.288409  768595 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:55:45.290193  768595 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:55:45.290315  768595 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:55:45.290797  768595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:55:45.290891  768595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:55:45.306404  768595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37931
	I0920 18:55:45.306820  768595 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:55:45.307492  768595 main.go:141] libmachine: Using API Version  1
	I0920 18:55:45.307521  768595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:55:45.307939  768595 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:55:45.308132  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:55:45.343272  768595 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 18:55:45.344502  768595 start.go:297] selected driver: kvm2
	I0920 18:55:45.344515  768595 start.go:901] validating driver "kvm2" against &{Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:55:45.344647  768595 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:55:45.344970  768595 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:55:45.345050  768595 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19678-739831/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:55:45.360027  768595 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:55:45.360707  768595 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:55:45.360736  768595 cni.go:84] Creating CNI manager for ""
	I0920 18:55:45.360793  768595 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0920 18:55:45.360859  768595 start.go:340] cluster config:
	{Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:55:45.361009  768595 iso.go:125] acquiring lock: {Name:mk7c8e0c52ea50ffb7ac28fb9347d4f667085c62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:55:45.363552  768595 out.go:177] * Starting "ha-525790" primary control-plane node in "ha-525790" cluster
	I0920 18:55:45.364920  768595 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:55:45.364979  768595 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:55:45.364990  768595 cache.go:56] Caching tarball of preloaded images
	I0920 18:55:45.365061  768595 preload.go:172] Found /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:55:45.365070  768595 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:55:45.365198  768595 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 18:55:45.365394  768595 start.go:360] acquireMachinesLock for ha-525790: {Name:mke27b943eaf3105a3a7818ba8cbb5bd07aa92e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:55:45.365441  768595 start.go:364] duration metric: took 28.871µs to acquireMachinesLock for "ha-525790"
	I0920 18:55:45.365453  768595 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:55:45.365460  768595 fix.go:54] fixHost starting: 
	I0920 18:55:45.365716  768595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:55:45.365748  768595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:55:45.379754  768595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37639
	I0920 18:55:45.380277  768595 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:55:45.380763  768595 main.go:141] libmachine: Using API Version  1
	I0920 18:55:45.380778  768595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:55:45.381096  768595 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:55:45.381300  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:55:45.381472  768595 main.go:141] libmachine: (ha-525790) Calling .GetState
	I0920 18:55:45.382944  768595 fix.go:112] recreateIfNeeded on ha-525790: state=Running err=<nil>
	W0920 18:55:45.382979  768595 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:55:45.384708  768595 out.go:177] * Updating the running kvm2 "ha-525790" VM ...
	I0920 18:55:45.385966  768595 machine.go:93] provisionDockerMachine start ...
	I0920 18:55:45.385981  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:55:45.386173  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:55:45.388503  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.388933  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:45.388960  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.389104  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:55:45.389273  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.389402  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.389518  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:55:45.389711  768595 main.go:141] libmachine: Using SSH client type: native
	I0920 18:55:45.389908  768595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:55:45.389919  768595 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:55:45.492072  768595 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-525790
	
	I0920 18:55:45.492099  768595 main.go:141] libmachine: (ha-525790) Calling .GetMachineName
	I0920 18:55:45.492366  768595 buildroot.go:166] provisioning hostname "ha-525790"
	I0920 18:55:45.492393  768595 main.go:141] libmachine: (ha-525790) Calling .GetMachineName
	I0920 18:55:45.492559  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:55:45.495258  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.495689  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:45.495715  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.495923  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:55:45.496094  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.496279  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.496427  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:55:45.496584  768595 main.go:141] libmachine: Using SSH client type: native
	I0920 18:55:45.496775  768595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:55:45.496788  768595 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-525790 && echo "ha-525790" | sudo tee /etc/hostname
	I0920 18:55:45.611170  768595 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-525790
	
	I0920 18:55:45.611203  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:55:45.613965  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.614392  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:45.614418  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.614605  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:55:45.614780  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.614979  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.615163  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:55:45.615334  768595 main.go:141] libmachine: Using SSH client type: native
	I0920 18:55:45.615507  768595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:55:45.615522  768595 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-525790' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-525790/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-525790' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:55:45.716203  768595 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:55:45.716236  768595 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19678-739831/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-739831/.minikube}
	I0920 18:55:45.716258  768595 buildroot.go:174] setting up certificates
	I0920 18:55:45.716266  768595 provision.go:84] configureAuth start
	I0920 18:55:45.716287  768595 main.go:141] libmachine: (ha-525790) Calling .GetMachineName
	I0920 18:55:45.716546  768595 main.go:141] libmachine: (ha-525790) Calling .GetIP
	I0920 18:55:45.719410  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.719789  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:45.719816  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.720053  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:55:45.722137  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.722463  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:45.722483  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.722613  768595 provision.go:143] copyHostCerts
	I0920 18:55:45.722648  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 18:55:45.722687  768595 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem, removing ...
	I0920 18:55:45.722704  768595 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 18:55:45.722767  768595 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem (1078 bytes)
	I0920 18:55:45.722893  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 18:55:45.722922  768595 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem, removing ...
	I0920 18:55:45.722929  768595 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 18:55:45.722959  768595 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem (1123 bytes)
	I0920 18:55:45.723019  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 18:55:45.723040  768595 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem, removing ...
	I0920 18:55:45.723046  768595 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 18:55:45.723071  768595 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem (1679 bytes)
	I0920 18:55:45.723132  768595 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem org=jenkins.ha-525790 san=[127.0.0.1 192.168.39.149 ha-525790 localhost minikube]
	I0920 18:55:45.874751  768595 provision.go:177] copyRemoteCerts
	I0920 18:55:45.874835  768595 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:55:45.874884  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:55:45.877528  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.877971  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:45.878002  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.878210  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:55:45.878387  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.878591  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:55:45.878724  768595 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:55:45.960427  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 18:55:45.960518  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0920 18:55:45.994757  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 18:55:45.994865  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 18:55:46.024642  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 18:55:46.024718  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:55:46.055496  768595 provision.go:87] duration metric: took 339.216483ms to configureAuth
	I0920 18:55:46.055535  768595 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:55:46.055829  768595 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:55:46.055929  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:55:46.058831  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:46.059288  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:46.059324  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:46.059533  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:55:46.059716  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:46.059891  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:46.060010  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:55:46.060167  768595 main.go:141] libmachine: Using SSH client type: native
	I0920 18:55:46.060375  768595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:55:46.060391  768595 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:57:16.901155  768595 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:57:16.901195  768595 machine.go:96] duration metric: took 1m31.515216231s to provisionDockerMachine
	I0920 18:57:16.901213  768595 start.go:293] postStartSetup for "ha-525790" (driver="kvm2")
	I0920 18:57:16.901229  768595 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:57:16.901256  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:57:16.901619  768595 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:57:16.901655  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:57:16.904582  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:16.905033  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:16.905077  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:16.905237  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:57:16.905435  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:57:16.905596  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:57:16.905768  768595 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:57:16.986592  768595 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:57:16.990860  768595 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:57:16.990889  768595 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/addons for local assets ...
	I0920 18:57:16.990948  768595 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/files for local assets ...
	I0920 18:57:16.991031  768595 filesync.go:149] local asset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> 7484972.pem in /etc/ssl/certs
	I0920 18:57:16.991042  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /etc/ssl/certs/7484972.pem
	I0920 18:57:16.991128  768595 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:57:17.000970  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 18:57:17.025421  768595 start.go:296] duration metric: took 124.189503ms for postStartSetup
	I0920 18:57:17.025508  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:57:17.025853  768595 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0920 18:57:17.025891  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:57:17.028640  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.029043  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:17.029071  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.029274  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:57:17.029491  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:57:17.029672  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:57:17.029818  768595 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	W0920 18:57:17.109879  768595 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0920 18:57:17.109911  768595 fix.go:56] duration metric: took 1m31.744451562s for fixHost
	I0920 18:57:17.109970  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:57:17.112933  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.113331  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:17.113363  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.113469  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:57:17.113648  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:57:17.113876  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:57:17.114026  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:57:17.114184  768595 main.go:141] libmachine: Using SSH client type: native
	I0920 18:57:17.114401  768595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:57:17.114415  768595 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:57:17.216062  768595 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726858637.181333539
	
	I0920 18:57:17.216090  768595 fix.go:216] guest clock: 1726858637.181333539
	I0920 18:57:17.216101  768595 fix.go:229] Guest: 2024-09-20 18:57:17.181333539 +0000 UTC Remote: 2024-09-20 18:57:17.109918074 +0000 UTC m=+91.872102399 (delta=71.415465ms)
	I0920 18:57:17.216125  768595 fix.go:200] guest clock delta is within tolerance: 71.415465ms
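The fix.go lines above compare the guest's "date +%s.%N" output against the host clock and accept the ~71ms delta. A minimal, self-contained Go sketch of that comparison; the 2s tolerance used here is a hypothetical value for illustration, not minikube's actual threshold:

    // Sketch only (not minikube's fix.go): parse the guest's `date +%s.%N`
    // output and check whether the guest/host clock delta is within a tolerance.
    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock converts output like "1726858637.181333539" into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		// Pad/truncate the fractional part to 9 digits (nanoseconds).
    		frac := (parts[1] + "000000000")[:9]
    		nsec, err = strconv.ParseInt(frac, 10, 64)
    		if err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1726858637.181333539")
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Now().Sub(guest)
    	tolerance := 2 * time.Second // hypothetical threshold
    	if math.Abs(float64(delta)) <= float64(tolerance) {
    		fmt.Printf("guest clock delta %v is within tolerance %v\n", delta, tolerance)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance %v\n", delta, tolerance)
    	}
    }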
	I0920 18:57:17.216130  768595 start.go:83] releasing machines lock for "ha-525790", held for 1m31.850683513s
	I0920 18:57:17.216152  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:57:17.216461  768595 main.go:141] libmachine: (ha-525790) Calling .GetIP
	I0920 18:57:17.219017  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.219376  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:17.219412  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.219494  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:57:17.220012  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:57:17.220193  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:57:17.220325  768595 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:57:17.220390  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:57:17.220399  768595 ssh_runner.go:195] Run: cat /version.json
	I0920 18:57:17.220418  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:57:17.222866  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.223251  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.223284  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:17.223301  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.223449  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:57:17.223621  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:57:17.223790  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:17.223811  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.223813  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:57:17.223960  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:57:17.223963  768595 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:57:17.224104  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:57:17.224245  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:57:17.224417  768595 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:57:17.296110  768595 ssh_runner.go:195] Run: systemctl --version
	I0920 18:57:17.321175  768595 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:57:17.477104  768595 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:57:17.485831  768595 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:57:17.485914  768595 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:57:17.495337  768595 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 18:57:17.495360  768595 start.go:495] detecting cgroup driver to use...
	I0920 18:57:17.495424  768595 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:57:17.511930  768595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:57:17.525328  768595 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:57:17.525387  768595 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:57:17.538722  768595 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:57:17.552122  768595 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:57:17.698681  768595 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:57:17.845821  768595 docker.go:233] disabling docker service ...
	I0920 18:57:17.845899  768595 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:57:17.863738  768595 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:57:17.877401  768595 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:57:18.024631  768595 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:57:18.172584  768595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:57:18.186842  768595 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:57:18.205846  768595 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:57:18.205925  768595 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.216288  768595 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:57:18.216358  768595 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.226555  768595 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.237201  768595 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.247630  768595 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:57:18.257984  768595 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.267924  768595 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.278978  768595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.288891  768595 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:57:18.297865  768595 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:57:18.306911  768595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:57:18.446180  768595 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:57:19.895749  768595 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.449526733s)
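The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch CRI-O to the cgroupfs cgroup manager before the service is restarted. As an illustration only, the same edits expressed in-memory with Go's regexp package (minikube itself performs them with sed over SSH; the sample config content is invented):

    // Sketch of the sed-style edits applied to the CRI-O drop-in config.
    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    `
    	// Point CRI-O at the desired pause image.
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
    	// Switch the cgroup manager to cgroupfs.
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
    	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
    	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
    		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

    	fmt.Print(conf)
    }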
	I0920 18:57:19.895791  768595 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:57:19.895837  768595 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:57:19.904678  768595 start.go:563] Will wait 60s for crictl version
	I0920 18:57:19.904743  768595 ssh_runner.go:195] Run: which crictl
	I0920 18:57:19.908608  768595 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:57:19.945193  768595 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
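The crictl version output above is plain "Key:  value" text. A small sketch of parsing it into a map, illustrative only and not minikube's parser:

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    // parseCrictlVersion splits each "Key:  value" line of crictl version output.
    func parseCrictlVersion(out string) map[string]string {
    	fields := map[string]string{}
    	sc := bufio.NewScanner(strings.NewReader(out))
    	for sc.Scan() {
    		if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
    			fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
    		}
    	}
    	return fields
    }

    func main() {
    	out := `Version:  0.1.0
    RuntimeName:  cri-o
    RuntimeVersion:  1.29.1
    RuntimeApiVersion:  v1`
    	v := parseCrictlVersion(out)
    	fmt.Println(v["RuntimeName"], v["RuntimeVersion"]) // cri-o 1.29.1
    }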
	I0920 18:57:19.945279  768595 ssh_runner.go:195] Run: crio --version
	I0920 18:57:19.974543  768595 ssh_runner.go:195] Run: crio --version
	I0920 18:57:20.007822  768595 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:57:20.009139  768595 main.go:141] libmachine: (ha-525790) Calling .GetIP
	I0920 18:57:20.011764  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:20.012169  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:20.012198  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:20.012388  768595 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:57:20.017342  768595 kubeadm.go:883] updating cluster {Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:57:20.017482  768595 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:57:20.017559  768595 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:57:20.062678  768595 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:57:20.062704  768595 crio.go:433] Images already preloaded, skipping extraction
	I0920 18:57:20.062757  768595 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:57:20.098285  768595 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:57:20.098310  768595 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:57:20.098320  768595 kubeadm.go:934] updating node { 192.168.39.149 8443 v1.31.1 crio true true} ...
	I0920 18:57:20.098422  768595 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-525790 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:57:20.098485  768595 ssh_runner.go:195] Run: crio config
	I0920 18:57:20.146689  768595 cni.go:84] Creating CNI manager for ""
	I0920 18:57:20.146719  768595 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0920 18:57:20.146731  768595 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:57:20.146762  768595 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.149 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-525790 NodeName:ha-525790 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.149"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.149 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:57:20.146949  768595 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.149
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-525790"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.149
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.149"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
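The kubeadm config above is generated from the cluster's node parameters (advertise address, CRI socket, node name). A hypothetical sketch of rendering just the InitConfiguration section with text/template; the struct and field names are invented for illustration and this is not minikube's own generation code:

    package main

    import (
    	"os"
    	"text/template"
    )

    type node struct {
    	Name      string
    	NodeIP    string
    	CRISocket string
    }

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: 8443
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.Name}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
      taints: []
    `

    func main() {
    	tmpl := template.Must(template.New("init").Parse(initCfg))
    	n := node{Name: "ha-525790", NodeIP: "192.168.39.149", CRISocket: "unix:///var/run/crio/crio.sock"}
    	if err := tmpl.Execute(os.Stdout, n); err != nil {
    		panic(err)
    	}
    }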
	
	I0920 18:57:20.146969  768595 kube-vip.go:115] generating kube-vip config ...
	I0920 18:57:20.147010  768595 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 18:57:20.158523  768595 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 18:57:20.158643  768595 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0920 18:57:20.158707  768595 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:57:20.168660  768595 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:57:20.168733  768595 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0920 18:57:20.178461  768595 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0920 18:57:20.198566  768595 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:57:20.217954  768595 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0920 18:57:20.237499  768595 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 18:57:20.258010  768595 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 18:57:20.262485  768595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:57:20.407038  768595 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:57:20.422336  768595 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790 for IP: 192.168.39.149
	I0920 18:57:20.422365  768595 certs.go:194] generating shared ca certs ...
	I0920 18:57:20.422387  768595 certs.go:226] acquiring lock for ca certs: {Name:mkf559981e1ff96dd3b092845a7637f34a653668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:57:20.422549  768595 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key
	I0920 18:57:20.422595  768595 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key
	I0920 18:57:20.422607  768595 certs.go:256] generating profile certs ...
	I0920 18:57:20.422714  768595 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.key
	I0920 18:57:20.422742  768595 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.5e4b97f0
	I0920 18:57:20.422758  768595 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.5e4b97f0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.149 192.168.39.246 192.168.39.105 192.168.39.254]
	I0920 18:57:20.498103  768595 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.5e4b97f0 ...
	I0920 18:57:20.498146  768595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.5e4b97f0: {Name:mkf1c7de4d51cd00dcbb302f98eb38a12aeaa743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:57:20.498349  768595 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.5e4b97f0 ...
	I0920 18:57:20.498366  768595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.5e4b97f0: {Name:mkd16bd720a2c366eb4c3af52495872448237117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:57:20.498439  768595 certs.go:381] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.5e4b97f0 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt
	I0920 18:57:20.498595  768595 certs.go:385] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.5e4b97f0 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key
	I0920 18:57:20.498727  768595 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key
	I0920 18:57:20.498744  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 18:57:20.498757  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 18:57:20.498773  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 18:57:20.498786  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 18:57:20.498798  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 18:57:20.498815  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 18:57:20.498828  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 18:57:20.498839  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 18:57:20.498902  768595 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem (1338 bytes)
	W0920 18:57:20.498929  768595 certs.go:480] ignoring /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497_empty.pem, impossibly tiny 0 bytes
	I0920 18:57:20.498939  768595 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:57:20.498966  768595 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:57:20.498987  768595 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:57:20.499009  768595 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem (1679 bytes)
	I0920 18:57:20.499046  768595 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 18:57:20.499073  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem -> /usr/share/ca-certificates/748497.pem
	I0920 18:57:20.499086  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /usr/share/ca-certificates/7484972.pem
	I0920 18:57:20.499098  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:57:20.499673  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:57:20.526194  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:57:20.550080  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:57:20.573814  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:57:20.597383  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0920 18:57:20.621333  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:57:20.644650  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:57:20.669077  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:57:20.692742  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem --> /usr/share/ca-certificates/748497.pem (1338 bytes)
	I0920 18:57:20.716696  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /usr/share/ca-certificates/7484972.pem (1708 bytes)
	I0920 18:57:20.740168  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:57:20.763494  768595 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:57:20.779948  768595 ssh_runner.go:195] Run: openssl version
	I0920 18:57:20.785654  768595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7484972.pem && ln -fs /usr/share/ca-certificates/7484972.pem /etc/ssl/certs/7484972.pem"
	I0920 18:57:20.796055  768595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7484972.pem
	I0920 18:57:20.800308  768595 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 18:33 /usr/share/ca-certificates/7484972.pem
	I0920 18:57:20.800350  768595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7484972.pem
	I0920 18:57:20.805711  768595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7484972.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:57:20.814658  768595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:57:20.825022  768595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:57:20.829283  768595 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:57:20.829328  768595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:57:20.835197  768595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:57:20.844367  768595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/748497.pem && ln -fs /usr/share/ca-certificates/748497.pem /etc/ssl/certs/748497.pem"
	I0920 18:57:20.858330  768595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/748497.pem
	I0920 18:57:20.862698  768595 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 18:33 /usr/share/ca-certificates/748497.pem
	I0920 18:57:20.862756  768595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/748497.pem
	I0920 18:57:20.868290  768595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/748497.pem /etc/ssl/certs/51391683.0"
	I0920 18:57:20.877322  768595 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:57:20.881726  768595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:57:20.887174  768595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:57:20.892568  768595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:57:20.897933  768595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:57:20.903504  768595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:57:20.908964  768595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
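The openssl "-checkend 86400" runs above verify that each control-plane certificate remains valid for at least the next 24 hours. An equivalent check in Go using only the standard library; the path is one of the certs named in the log and error handling is kept minimal:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at certPath expires within d.
    func expiresWithin(certPath string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(certPath)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", certPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if soon {
    		fmt.Println("certificate will expire within 24h")
    	} else {
    		fmt.Println("certificate is valid for at least another 24h")
    	}
    }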
	I0920 18:57:20.914297  768595 kubeadm.go:392] StartCluster: {Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:57:20.914419  768595 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:57:20.914479  768595 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:57:20.953634  768595 cri.go:89] found id: "25771bcd68395f46a10f7a984281c99bb335a8ca69efb4245fa13e739f74e880"
	I0920 18:57:20.953663  768595 cri.go:89] found id: "05474c6dd3411b2d54bcdb9c489372dbdd009e7696128a025d961ffa61cea90e"
	I0920 18:57:20.953670  768595 cri.go:89] found id: "fdef47cd693637030df15d12b4203fda70a684a6ba84cf20353b69d3f9314810"
	I0920 18:57:20.953675  768595 cri.go:89] found id: "57fdde7a007ff9a10cfbb40f67eb3fd2036aeb4918ebe808fdb7ab94429b6f90"
	I0920 18:57:20.953679  768595 cri.go:89] found id: "172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1"
	I0920 18:57:20.953684  768595 cri.go:89] found id: "3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e"
	I0920 18:57:20.953688  768595 cri.go:89] found id: "5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98"
	I0920 18:57:20.953692  768595 cri.go:89] found id: "3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8"
	I0920 18:57:20.953696  768595 cri.go:89] found id: "c704a3be19bcb0cfb653cb3bdad4548ff16ab59fc886290b6b1ed57874b379cc"
	I0920 18:57:20.953705  768595 cri.go:89] found id: "7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706"
	I0920 18:57:20.953709  768595 cri.go:89] found id: "1196adfd1199669f289106197057a6a027e2a97c97830961d5c102c7143d67bb"
	I0920 18:57:20.953727  768595 cri.go:89] found id: "bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93"
	I0920 18:57:20.953734  768595 cri.go:89] found id: "49582cb9e07244a19c1edf1accdab94a1702d3cccc9e120b67b8c49f7629db72"
	I0920 18:57:20.953738  768595 cri.go:89] found id: ""
	I0920 18:57:20.953792  768595 ssh_runner.go:195] Run: sudo runc list -f json
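The "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system" call above yields the container IDs listed as "found id:". A minimal sketch of issuing the same query and collecting the IDs, assuming crictl is on PATH and sudo is available:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // kubeSystemContainers lists all container IDs labelled with the kube-system namespace.
    func kubeSystemContainers() ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, id := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if id != "" {
    			ids = append(ids, id)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	ids, err := kubeSystemContainers()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	for _, id := range ids {
    		fmt.Println("found id:", id)
    	}
    }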

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-linux-amd64 node list -p ha-525790 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-525790
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-525790 -n ha-525790
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-525790 logs -n 25: (1.701646958s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-525790 cp ha-525790-m03:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m02:/home/docker/cp-test_ha-525790-m03_ha-525790-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790-m02 sudo cat                                          | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m03_ha-525790-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m03:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04:/home/docker/cp-test_ha-525790-m03_ha-525790-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790-m04 sudo cat                                          | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m03_ha-525790-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-525790 cp testdata/cp-test.txt                                                | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3362703692/001/cp-test_ha-525790-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790:/home/docker/cp-test_ha-525790-m04_ha-525790.txt                       |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790 sudo cat                                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m04_ha-525790.txt                                 |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m02:/home/docker/cp-test_ha-525790-m04_ha-525790-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790-m02 sudo cat                                          | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m04_ha-525790-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m03:/home/docker/cp-test_ha-525790-m04_ha-525790-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790-m03 sudo cat                                          | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m04_ha-525790-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-525790 node stop m02 -v=7                                                     | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-525790 node start m02 -v=7                                                    | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-525790 -v=7                                                           | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-525790 -v=7                                                                | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-525790 --wait=true -v=7                                                    | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-525790                                                                | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 19:06 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:55:45
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:55:45.275296  768595 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:55:45.275412  768595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:55:45.275421  768595 out.go:358] Setting ErrFile to fd 2...
	I0920 18:55:45.275425  768595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:55:45.275635  768595 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
	I0920 18:55:45.276210  768595 out.go:352] Setting JSON to false
	I0920 18:55:45.277141  768595 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9495,"bootTime":1726849050,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:55:45.277240  768595 start.go:139] virtualization: kvm guest
	I0920 18:55:45.279445  768595 out.go:177] * [ha-525790] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:55:45.280764  768595 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 18:55:45.280835  768595 notify.go:220] Checking for updates...
	I0920 18:55:45.283366  768595 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:55:45.284696  768595 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:55:45.285940  768595 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:55:45.287169  768595 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:55:45.288409  768595 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:55:45.290193  768595 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:55:45.290315  768595 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:55:45.290797  768595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:55:45.290891  768595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:55:45.306404  768595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37931
	I0920 18:55:45.306820  768595 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:55:45.307492  768595 main.go:141] libmachine: Using API Version  1
	I0920 18:55:45.307521  768595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:55:45.307939  768595 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:55:45.308132  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:55:45.343272  768595 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 18:55:45.344502  768595 start.go:297] selected driver: kvm2
	I0920 18:55:45.344515  768595 start.go:901] validating driver "kvm2" against &{Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false e
fk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:55:45.344647  768595 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:55:45.344970  768595 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:55:45.345050  768595 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19678-739831/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:55:45.360027  768595 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:55:45.360707  768595 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:55:45.360736  768595 cni.go:84] Creating CNI manager for ""
	I0920 18:55:45.360793  768595 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0920 18:55:45.360859  768595 start.go:340] cluster config:
	{Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel
:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:55:45.361009  768595 iso.go:125] acquiring lock: {Name:mk7c8e0c52ea50ffb7ac28fb9347d4f667085c62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:55:45.363552  768595 out.go:177] * Starting "ha-525790" primary control-plane node in "ha-525790" cluster
	I0920 18:55:45.364920  768595 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:55:45.364979  768595 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:55:45.364990  768595 cache.go:56] Caching tarball of preloaded images
	I0920 18:55:45.365061  768595 preload.go:172] Found /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:55:45.365070  768595 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:55:45.365198  768595 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 18:55:45.365394  768595 start.go:360] acquireMachinesLock for ha-525790: {Name:mke27b943eaf3105a3a7818ba8cbb5bd07aa92e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:55:45.365441  768595 start.go:364] duration metric: took 28.871µs to acquireMachinesLock for "ha-525790"
	I0920 18:55:45.365453  768595 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:55:45.365460  768595 fix.go:54] fixHost starting: 
	I0920 18:55:45.365716  768595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:55:45.365748  768595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:55:45.379754  768595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37639
	I0920 18:55:45.380277  768595 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:55:45.380763  768595 main.go:141] libmachine: Using API Version  1
	I0920 18:55:45.380778  768595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:55:45.381096  768595 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:55:45.381300  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:55:45.381472  768595 main.go:141] libmachine: (ha-525790) Calling .GetState
	I0920 18:55:45.382944  768595 fix.go:112] recreateIfNeeded on ha-525790: state=Running err=<nil>
	W0920 18:55:45.382979  768595 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:55:45.384708  768595 out.go:177] * Updating the running kvm2 "ha-525790" VM ...
	I0920 18:55:45.385966  768595 machine.go:93] provisionDockerMachine start ...
	I0920 18:55:45.385981  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:55:45.386173  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:55:45.388503  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.388933  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:45.388960  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.389104  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:55:45.389273  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.389402  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.389518  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:55:45.389711  768595 main.go:141] libmachine: Using SSH client type: native
	I0920 18:55:45.389908  768595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:55:45.389919  768595 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:55:45.492072  768595 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-525790
	
	I0920 18:55:45.492099  768595 main.go:141] libmachine: (ha-525790) Calling .GetMachineName
	I0920 18:55:45.492366  768595 buildroot.go:166] provisioning hostname "ha-525790"
	I0920 18:55:45.492393  768595 main.go:141] libmachine: (ha-525790) Calling .GetMachineName
	I0920 18:55:45.492559  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:55:45.495258  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.495689  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:45.495715  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.495923  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:55:45.496094  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.496279  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.496427  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:55:45.496584  768595 main.go:141] libmachine: Using SSH client type: native
	I0920 18:55:45.496775  768595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:55:45.496788  768595 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-525790 && echo "ha-525790" | sudo tee /etc/hostname
	I0920 18:55:45.611170  768595 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-525790
	
	I0920 18:55:45.611203  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:55:45.613965  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.614392  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:45.614418  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.614605  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:55:45.614780  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.614979  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.615163  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:55:45.615334  768595 main.go:141] libmachine: Using SSH client type: native
	I0920 18:55:45.615507  768595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:55:45.615522  768595 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-525790' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-525790/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-525790' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:55:45.716203  768595 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:55:45.716236  768595 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19678-739831/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-739831/.minikube}
	I0920 18:55:45.716258  768595 buildroot.go:174] setting up certificates
	I0920 18:55:45.716266  768595 provision.go:84] configureAuth start
	I0920 18:55:45.716287  768595 main.go:141] libmachine: (ha-525790) Calling .GetMachineName
	I0920 18:55:45.716546  768595 main.go:141] libmachine: (ha-525790) Calling .GetIP
	I0920 18:55:45.719410  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.719789  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:45.719816  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.720053  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:55:45.722137  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.722463  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:45.722483  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.722613  768595 provision.go:143] copyHostCerts
	I0920 18:55:45.722648  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 18:55:45.722687  768595 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem, removing ...
	I0920 18:55:45.722704  768595 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 18:55:45.722767  768595 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem (1078 bytes)
	I0920 18:55:45.722893  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 18:55:45.722922  768595 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem, removing ...
	I0920 18:55:45.722929  768595 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 18:55:45.722959  768595 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem (1123 bytes)
	I0920 18:55:45.723019  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 18:55:45.723040  768595 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem, removing ...
	I0920 18:55:45.723046  768595 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 18:55:45.723071  768595 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem (1679 bytes)
	I0920 18:55:45.723132  768595 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem org=jenkins.ha-525790 san=[127.0.0.1 192.168.39.149 ha-525790 localhost minikube]
	I0920 18:55:45.874751  768595 provision.go:177] copyRemoteCerts
	I0920 18:55:45.874835  768595 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:55:45.874884  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:55:45.877528  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.877971  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:45.878002  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.878210  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:55:45.878387  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.878591  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:55:45.878724  768595 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:55:45.960427  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 18:55:45.960518  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0920 18:55:45.994757  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 18:55:45.994865  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 18:55:46.024642  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 18:55:46.024718  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:55:46.055496  768595 provision.go:87] duration metric: took 339.216483ms to configureAuth
	I0920 18:55:46.055535  768595 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:55:46.055829  768595 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:55:46.055929  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:55:46.058831  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:46.059288  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:46.059324  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:46.059533  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:55:46.059716  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:46.059891  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:46.060010  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:55:46.060167  768595 main.go:141] libmachine: Using SSH client type: native
	I0920 18:55:46.060375  768595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:55:46.060391  768595 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:57:16.901155  768595 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:57:16.901195  768595 machine.go:96] duration metric: took 1m31.515216231s to provisionDockerMachine
	I0920 18:57:16.901213  768595 start.go:293] postStartSetup for "ha-525790" (driver="kvm2")
	I0920 18:57:16.901229  768595 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:57:16.901256  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:57:16.901619  768595 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:57:16.901655  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:57:16.904582  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:16.905033  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:16.905077  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:16.905237  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:57:16.905435  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:57:16.905596  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:57:16.905768  768595 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:57:16.986592  768595 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:57:16.990860  768595 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:57:16.990889  768595 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/addons for local assets ...
	I0920 18:57:16.990948  768595 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/files for local assets ...
	I0920 18:57:16.991031  768595 filesync.go:149] local asset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> 7484972.pem in /etc/ssl/certs
	I0920 18:57:16.991042  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /etc/ssl/certs/7484972.pem
	I0920 18:57:16.991128  768595 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:57:17.000970  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 18:57:17.025421  768595 start.go:296] duration metric: took 124.189503ms for postStartSetup
	I0920 18:57:17.025508  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:57:17.025853  768595 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0920 18:57:17.025891  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:57:17.028640  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.029043  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:17.029071  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.029274  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:57:17.029491  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:57:17.029672  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:57:17.029818  768595 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	W0920 18:57:17.109879  768595 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0920 18:57:17.109911  768595 fix.go:56] duration metric: took 1m31.744451562s for fixHost
	I0920 18:57:17.109970  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:57:17.112933  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.113331  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:17.113363  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.113469  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:57:17.113648  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:57:17.113876  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:57:17.114026  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:57:17.114184  768595 main.go:141] libmachine: Using SSH client type: native
	I0920 18:57:17.114401  768595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:57:17.114415  768595 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:57:17.216062  768595 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726858637.181333539
	
	I0920 18:57:17.216090  768595 fix.go:216] guest clock: 1726858637.181333539
	I0920 18:57:17.216101  768595 fix.go:229] Guest: 2024-09-20 18:57:17.181333539 +0000 UTC Remote: 2024-09-20 18:57:17.109918074 +0000 UTC m=+91.872102399 (delta=71.415465ms)
	I0920 18:57:17.216125  768595 fix.go:200] guest clock delta is within tolerance: 71.415465ms
	I0920 18:57:17.216130  768595 start.go:83] releasing machines lock for "ha-525790", held for 1m31.850683513s
	I0920 18:57:17.216152  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:57:17.216461  768595 main.go:141] libmachine: (ha-525790) Calling .GetIP
	I0920 18:57:17.219017  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.219376  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:17.219412  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.219494  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:57:17.220012  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:57:17.220193  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:57:17.220325  768595 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:57:17.220390  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:57:17.220399  768595 ssh_runner.go:195] Run: cat /version.json
	I0920 18:57:17.220418  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:57:17.222866  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.223251  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.223284  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:17.223301  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.223449  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:57:17.223621  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:57:17.223790  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:17.223811  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.223813  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:57:17.223960  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:57:17.223963  768595 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:57:17.224104  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:57:17.224245  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:57:17.224417  768595 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:57:17.296110  768595 ssh_runner.go:195] Run: systemctl --version
	I0920 18:57:17.321175  768595 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:57:17.477104  768595 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:57:17.485831  768595 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:57:17.485914  768595 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:57:17.495337  768595 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 18:57:17.495360  768595 start.go:495] detecting cgroup driver to use...
	I0920 18:57:17.495424  768595 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:57:17.511930  768595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:57:17.525328  768595 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:57:17.525387  768595 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:57:17.538722  768595 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:57:17.552122  768595 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:57:17.698681  768595 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:57:17.845821  768595 docker.go:233] disabling docker service ...
	I0920 18:57:17.845899  768595 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:57:17.863738  768595 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:57:17.877401  768595 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:57:18.024631  768595 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:57:18.172584  768595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:57:18.186842  768595 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:57:18.205846  768595 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:57:18.205925  768595 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.216288  768595 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:57:18.216358  768595 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.226555  768595 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.237201  768595 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.247630  768595 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:57:18.257984  768595 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.267924  768595 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.278978  768595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.288891  768595 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:57:18.297865  768595 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:57:18.306911  768595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:57:18.446180  768595 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:57:19.895749  768595 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.449526733s)
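The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O pins registry.k8s.io/pause:3.10 as the pause image and runs with the "cgroupfs" cgroup manager before the service is restarted. A minimal way to spot-check the result by hand (a sketch, assuming the ha-525790 profile is still running and that "minikube ssh" passes a trailing command through to the VM) would be:

	$ out/minikube-linux-amd64 -p ha-525790 ssh -- \
	    "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf"

The values printed should match the pause_image, cgroup_manager and conmon_cgroup settings the log reports writing.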
	I0920 18:57:19.895791  768595 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:57:19.895837  768595 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:57:19.904678  768595 start.go:563] Will wait 60s for crictl version
	I0920 18:57:19.904743  768595 ssh_runner.go:195] Run: which crictl
	I0920 18:57:19.908608  768595 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:57:19.945193  768595 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:57:19.945279  768595 ssh_runner.go:195] Run: crio --version
	I0920 18:57:19.974543  768595 ssh_runner.go:195] Run: crio --version
	I0920 18:57:20.007822  768595 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:57:20.009139  768595 main.go:141] libmachine: (ha-525790) Calling .GetIP
	I0920 18:57:20.011764  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:20.012169  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:20.012198  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:20.012388  768595 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:57:20.017342  768595 kubeadm.go:883] updating cluster {Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:57:20.017482  768595 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:57:20.017559  768595 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:57:20.062678  768595 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:57:20.062704  768595 crio.go:433] Images already preloaded, skipping extraction
	I0920 18:57:20.062757  768595 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:57:20.098285  768595 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:57:20.098310  768595 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:57:20.098320  768595 kubeadm.go:934] updating node { 192.168.39.149 8443 v1.31.1 crio true true} ...
	I0920 18:57:20.098422  768595 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-525790 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:57:20.098485  768595 ssh_runner.go:195] Run: crio config
	I0920 18:57:20.146689  768595 cni.go:84] Creating CNI manager for ""
	I0920 18:57:20.146719  768595 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0920 18:57:20.146731  768595 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:57:20.146762  768595 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.149 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-525790 NodeName:ha-525790 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.149"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.149 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:57:20.146949  768595 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.149
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-525790"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.149
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.149"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:57:20.146969  768595 kube-vip.go:115] generating kube-vip config ...
	I0920 18:57:20.147010  768595 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 18:57:20.158523  768595 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 18:57:20.158643  768595 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
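The manifest above is the kube-vip static pod rendered for this HA profile: it advertises the control-plane virtual IP 192.168.39.254 on eth0 and enables control-plane load-balancing on port 8443. As a rough sanity check once provisioning finishes (a sketch, assuming the kubeconfig context is named ha-525790 and that "minikube ssh" forwards the trailing command):

	$ out/minikube-linux-amd64 -p ha-525790 ssh -- "ip -4 addr show eth0 | grep 192.168.39.254"
	$ kubectl --context ha-525790 -n kube-system get pods -o wide | grep kube-vip

The first command shows whether the VIP is currently bound on this node; the second lists the kube-vip static pods once the API server is reachable again.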
	I0920 18:57:20.158707  768595 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:57:20.168660  768595 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:57:20.168733  768595 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0920 18:57:20.178461  768595 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0920 18:57:20.198566  768595 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:57:20.217954  768595 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0920 18:57:20.237499  768595 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 18:57:20.258010  768595 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 18:57:20.262485  768595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:57:20.407038  768595 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:57:20.422336  768595 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790 for IP: 192.168.39.149
	I0920 18:57:20.422365  768595 certs.go:194] generating shared ca certs ...
	I0920 18:57:20.422387  768595 certs.go:226] acquiring lock for ca certs: {Name:mkf559981e1ff96dd3b092845a7637f34a653668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:57:20.422549  768595 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key
	I0920 18:57:20.422595  768595 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key
	I0920 18:57:20.422607  768595 certs.go:256] generating profile certs ...
	I0920 18:57:20.422714  768595 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.key
	I0920 18:57:20.422742  768595 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.5e4b97f0
	I0920 18:57:20.422758  768595 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.5e4b97f0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.149 192.168.39.246 192.168.39.105 192.168.39.254]
	I0920 18:57:20.498103  768595 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.5e4b97f0 ...
	I0920 18:57:20.498146  768595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.5e4b97f0: {Name:mkf1c7de4d51cd00dcbb302f98eb38a12aeaa743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:57:20.498349  768595 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.5e4b97f0 ...
	I0920 18:57:20.498366  768595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.5e4b97f0: {Name:mkd16bd720a2c366eb4c3af52495872448237117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:57:20.498439  768595 certs.go:381] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.5e4b97f0 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt
	I0920 18:57:20.498595  768595 certs.go:385] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.5e4b97f0 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key
	I0920 18:57:20.498727  768595 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key
	I0920 18:57:20.498744  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 18:57:20.498757  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 18:57:20.498773  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 18:57:20.498786  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 18:57:20.498798  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 18:57:20.498815  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 18:57:20.498828  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 18:57:20.498839  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 18:57:20.498902  768595 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem (1338 bytes)
	W0920 18:57:20.498929  768595 certs.go:480] ignoring /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497_empty.pem, impossibly tiny 0 bytes
	I0920 18:57:20.498939  768595 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:57:20.498966  768595 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:57:20.498987  768595 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:57:20.499009  768595 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem (1679 bytes)
	I0920 18:57:20.499046  768595 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 18:57:20.499073  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem -> /usr/share/ca-certificates/748497.pem
	I0920 18:57:20.499086  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /usr/share/ca-certificates/7484972.pem
	I0920 18:57:20.499098  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:57:20.499673  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:57:20.526194  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:57:20.550080  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:57:20.573814  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:57:20.597383  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0920 18:57:20.621333  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:57:20.644650  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:57:20.669077  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:57:20.692742  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem --> /usr/share/ca-certificates/748497.pem (1338 bytes)
	I0920 18:57:20.716696  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /usr/share/ca-certificates/7484972.pem (1708 bytes)
	I0920 18:57:20.740168  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:57:20.763494  768595 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:57:20.779948  768595 ssh_runner.go:195] Run: openssl version
	I0920 18:57:20.785654  768595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7484972.pem && ln -fs /usr/share/ca-certificates/7484972.pem /etc/ssl/certs/7484972.pem"
	I0920 18:57:20.796055  768595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7484972.pem
	I0920 18:57:20.800308  768595 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 18:33 /usr/share/ca-certificates/7484972.pem
	I0920 18:57:20.800350  768595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7484972.pem
	I0920 18:57:20.805711  768595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7484972.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:57:20.814658  768595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:57:20.825022  768595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:57:20.829283  768595 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:57:20.829328  768595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:57:20.835197  768595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:57:20.844367  768595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/748497.pem && ln -fs /usr/share/ca-certificates/748497.pem /etc/ssl/certs/748497.pem"
	I0920 18:57:20.858330  768595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/748497.pem
	I0920 18:57:20.862698  768595 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 18:33 /usr/share/ca-certificates/748497.pem
	I0920 18:57:20.862756  768595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/748497.pem
	I0920 18:57:20.868290  768595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/748497.pem /etc/ssl/certs/51391683.0"
	I0920 18:57:20.877322  768595 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:57:20.881726  768595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:57:20.887174  768595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:57:20.892568  768595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:57:20.897933  768595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:57:20.903504  768595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:57:20.908964  768595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 18:57:20.914297  768595 kubeadm.go:392] StartCluster: {Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:57:20.914419  768595 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:57:20.914479  768595 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:57:20.953634  768595 cri.go:89] found id: "25771bcd68395f46a10f7a984281c99bb335a8ca69efb4245fa13e739f74e880"
	I0920 18:57:20.953663  768595 cri.go:89] found id: "05474c6dd3411b2d54bcdb9c489372dbdd009e7696128a025d961ffa61cea90e"
	I0920 18:57:20.953670  768595 cri.go:89] found id: "fdef47cd693637030df15d12b4203fda70a684a6ba84cf20353b69d3f9314810"
	I0920 18:57:20.953675  768595 cri.go:89] found id: "57fdde7a007ff9a10cfbb40f67eb3fd2036aeb4918ebe808fdb7ab94429b6f90"
	I0920 18:57:20.953679  768595 cri.go:89] found id: "172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1"
	I0920 18:57:20.953684  768595 cri.go:89] found id: "3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e"
	I0920 18:57:20.953688  768595 cri.go:89] found id: "5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98"
	I0920 18:57:20.953692  768595 cri.go:89] found id: "3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8"
	I0920 18:57:20.953696  768595 cri.go:89] found id: "c704a3be19bcb0cfb653cb3bdad4548ff16ab59fc886290b6b1ed57874b379cc"
	I0920 18:57:20.953705  768595 cri.go:89] found id: "7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706"
	I0920 18:57:20.953709  768595 cri.go:89] found id: "1196adfd1199669f289106197057a6a027e2a97c97830961d5c102c7143d67bb"
	I0920 18:57:20.953727  768595 cri.go:89] found id: "bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93"
	I0920 18:57:20.953734  768595 cri.go:89] found id: "49582cb9e07244a19c1edf1accdab94a1702d3cccc9e120b67b8c49f7629db72"
	I0920 18:57:20.953738  768595 cri.go:89] found id: ""
	I0920 18:57:20.953792  768595 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 20 19:07:00 ha-525790 crio[3621]: time="2024-09-20 19:07:00.537228663Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859220537205676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b58edb64-2ddc-48b0-b3bc-edc4d4131392 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:07:00 ha-525790 crio[3621]: time="2024-09-20 19:07:00.537740765Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c1472996-c421-4e47-bf94-512d0fd0c072 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:00 ha-525790 crio[3621]: time="2024-09-20 19:07:00.537820434Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c1472996-c421-4e47-bf94-512d0fd0c072 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:00 ha-525790 crio[3621]: time="2024-09-20 19:07:00.538215412Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60cc74ea619fb947f0b84edcc3d897bff8752b038a8f9b1725bd5384cedcaabd,PodSandboxId:16a2a1305a51f60d21ddbbb6bb09b78cb9369c87e00a2d48189b0a13c1dff725,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726858739639401417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c15024e982c2250e97fb1c5a8ab6c46acbc2be83df9e1385a32e31ee31ed6,PodSandboxId:64ba18194b8cee6a5dc945c8a860e30df4889b5327784d28ca84fa101d3ee3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858689630170928,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5187d6ee59db46aeb3871c648064f41c7129c72b4fb12215dcfa9ff690e3dacc,PodSandboxId:16a2a1305a51f60d21ddbbb6bb09b78cb9369c87e00a2d48189b0a13c1dff725,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726858688628099955,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d017a5b283a90948c66d42e738cadc0b0558d24b471a758eba6ba1ba1d8f7c38,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858686629742984,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667c79074c454aa20ce82977f878cfe4a37c6f5ea0695c815cbba15549f3a45f,PodSandboxId:0a6e91416ea52f22ed259013e4a0a21bf72498fd8c931f248c2115876ae94d63,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726858681003653008,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f33ac8941017429ef2f8b90f5da558d02aee1e4f28f943f00cbb9948c09384,PodSandboxId:63cc3aec72e5ac7247b8edf4ee58731ed8aeec1987054757f74a79629da16d78,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726858662970382907,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250bdcc9f914b29a36cef0bb52cd1ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c9c9f659f7c8eea8960ff4d57a4049347beaf567d58cc6dd20e62ae35179d4,PodSandboxId:f4deab987a6c34e26ea3f805609a830b7822a6602102d1f413f9ce3e6884ef4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647995000091,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fefbc436d3eff2c5abd1684c59842894a743e26f9eb3ca974dbbc048112157d3,PodSandboxId:146e6c4948059463392671cd62b7ec8b5a29dc21e9ab6ebe87a3f7c25839e916,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858647885688947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041c8157b3922d976470968a7d805abcf93da72e8542bc6a01ac49bce0a281c4,PodSandboxId:097a4985f63bc254a2bdce47ba0c22b1b5b624fcf1c6cf3c7eb8ea0012d8d427,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726858647587461354,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5c19fcb571e8a32048d366070b16f702a20dcc8c9e90f7efdbdbcf068bb31c1,PodSandboxId:947865a8625cf4fd5130e55bfe16430a11573647782a77e8f42d10abe79271e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858647717412599,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1977c4370e572f9d46a89fdac2f5bf124e2fe7a35d95b9c4d329384e57264a9,PodSandboxId:dddb1e001fdf1c2b4148f34e191168adccea81cff10b5b3e18b1e18c43e20229,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647711164608,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf18d395747b302270fce67fcca779ef04ced72182d681e473e9fc81edb7b5a,PodSandboxId:f3b7300b04471e8ff85210139d0aaf4a37484fda119ba2d5c88427f9d6e07acb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858647658691652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561
e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a18e65180dd208a362d51b7dedc97d749dc64b3373b275ba6e9776934ebeb40,PodSandboxId:64ba18194b8cee6a5dc945c8a860e30df4889b5327784d28ca84fa101d3ee3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726858647533318453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:231315ec7d013fd90fbc26e6a4f8fc59d177b677f3ec717caa70d251b84de829,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726858647426978213,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:344b03b51dddb76a43e34eff505cae1c7f2fc0c407fd4b1907c7b90ca3f1740d,PodSandboxId:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726858192106346190,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e,PodSandboxId:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056980796182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1,PodSandboxId:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056983757613,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98,PodSandboxId:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726858044669171219,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8,PodSandboxId:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726858044313148897,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706,PodSandboxId:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726858033124074067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93,PodSandboxId:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726858033076556541,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c1472996-c421-4e47-bf94-512d0fd0c072 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:00 ha-525790 crio[3621]: time="2024-09-20 19:07:00.582461549Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1dd4245a-bf08-46ca-80b1-21e16803dff6 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:07:00 ha-525790 crio[3621]: time="2024-09-20 19:07:00.582548535Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1dd4245a-bf08-46ca-80b1-21e16803dff6 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:07:00 ha-525790 crio[3621]: time="2024-09-20 19:07:00.583516256Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=78d2d0e2-2054-46eb-9883-7d3ee4041214 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:07:00 ha-525790 crio[3621]: time="2024-09-20 19:07:00.584009630Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859220583985200,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=78d2d0e2-2054-46eb-9883-7d3ee4041214 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:07:00 ha-525790 crio[3621]: time="2024-09-20 19:07:00.584944686Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=53585b2b-0a0a-4082-a845-516f1a08aa31 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:00 ha-525790 crio[3621]: time="2024-09-20 19:07:00.585020586Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=53585b2b-0a0a-4082-a845-516f1a08aa31 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:00 ha-525790 crio[3621]: time="2024-09-20 19:07:00.585516642Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60cc74ea619fb947f0b84edcc3d897bff8752b038a8f9b1725bd5384cedcaabd,PodSandboxId:16a2a1305a51f60d21ddbbb6bb09b78cb9369c87e00a2d48189b0a13c1dff725,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726858739639401417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c15024e982c2250e97fb1c5a8ab6c46acbc2be83df9e1385a32e31ee31ed6,PodSandboxId:64ba18194b8cee6a5dc945c8a860e30df4889b5327784d28ca84fa101d3ee3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858689630170928,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5187d6ee59db46aeb3871c648064f41c7129c72b4fb12215dcfa9ff690e3dacc,PodSandboxId:16a2a1305a51f60d21ddbbb6bb09b78cb9369c87e00a2d48189b0a13c1dff725,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726858688628099955,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d017a5b283a90948c66d42e738cadc0b0558d24b471a758eba6ba1ba1d8f7c38,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858686629742984,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667c79074c454aa20ce82977f878cfe4a37c6f5ea0695c815cbba15549f3a45f,PodSandboxId:0a6e91416ea52f22ed259013e4a0a21bf72498fd8c931f248c2115876ae94d63,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726858681003653008,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f33ac8941017429ef2f8b90f5da558d02aee1e4f28f943f00cbb9948c09384,PodSandboxId:63cc3aec72e5ac7247b8edf4ee58731ed8aeec1987054757f74a79629da16d78,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726858662970382907,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250bdcc9f914b29a36cef0bb52cd1ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c9c9f659f7c8eea8960ff4d57a4049347beaf567d58cc6dd20e62ae35179d4,PodSandboxId:f4deab987a6c34e26ea3f805609a830b7822a6602102d1f413f9ce3e6884ef4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647995000091,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fefbc436d3eff2c5abd1684c59842894a743e26f9eb3ca974dbbc048112157d3,PodSandboxId:146e6c4948059463392671cd62b7ec8b5a29dc21e9ab6ebe87a3f7c25839e916,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858647885688947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041c8157b3922d976470968a7d805abcf93da72e8542bc6a01ac49bce0a281c4,PodSandboxId:097a4985f63bc254a2bdce47ba0c22b1b5b624fcf1c6cf3c7eb8ea0012d8d427,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726858647587461354,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5c19fcb571e8a32048d366070b16f702a20dcc8c9e90f7efdbdbcf068bb31c1,PodSandboxId:947865a8625cf4fd5130e55bfe16430a11573647782a77e8f42d10abe79271e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858647717412599,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1977c4370e572f9d46a89fdac2f5bf124e2fe7a35d95b9c4d329384e57264a9,PodSandboxId:dddb1e001fdf1c2b4148f34e191168adccea81cff10b5b3e18b1e18c43e20229,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647711164608,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf18d395747b302270fce67fcca779ef04ced72182d681e473e9fc81edb7b5a,PodSandboxId:f3b7300b04471e8ff85210139d0aaf4a37484fda119ba2d5c88427f9d6e07acb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858647658691652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561
e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a18e65180dd208a362d51b7dedc97d749dc64b3373b275ba6e9776934ebeb40,PodSandboxId:64ba18194b8cee6a5dc945c8a860e30df4889b5327784d28ca84fa101d3ee3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726858647533318453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:231315ec7d013fd90fbc26e6a4f8fc59d177b677f3ec717caa70d251b84de829,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726858647426978213,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:344b03b51dddb76a43e34eff505cae1c7f2fc0c407fd4b1907c7b90ca3f1740d,PodSandboxId:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726858192106346190,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e,PodSandboxId:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056980796182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1,PodSandboxId:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056983757613,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98,PodSandboxId:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726858044669171219,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8,PodSandboxId:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726858044313148897,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706,PodSandboxId:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726858033124074067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93,PodSandboxId:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726858033076556541,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=53585b2b-0a0a-4082-a845-516f1a08aa31 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:00 ha-525790 crio[3621]: time="2024-09-20 19:07:00.630122690Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=125ecb08-5807-426c-a2a3-c1ffa7d3ce6f name=/runtime.v1.RuntimeService/Version
	Sep 20 19:07:00 ha-525790 crio[3621]: time="2024-09-20 19:07:00.630214457Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=125ecb08-5807-426c-a2a3-c1ffa7d3ce6f name=/runtime.v1.RuntimeService/Version
	Sep 20 19:07:00 ha-525790 crio[3621]: time="2024-09-20 19:07:00.631124108Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=59ed7d95-f7bc-4113-912b-95c2a04068c2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:07:00 ha-525790 crio[3621]: time="2024-09-20 19:07:00.631644603Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859220631621484,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=59ed7d95-f7bc-4113-912b-95c2a04068c2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:07:00 ha-525790 crio[3621]: time="2024-09-20 19:07:00.632137982Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8fec38b7-c945-42cb-b1d2-1094f1816443 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:00 ha-525790 crio[3621]: time="2024-09-20 19:07:00.632190736Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8fec38b7-c945-42cb-b1d2-1094f1816443 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:00 ha-525790 crio[3621]: time="2024-09-20 19:07:00.632629355Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60cc74ea619fb947f0b84edcc3d897bff8752b038a8f9b1725bd5384cedcaabd,PodSandboxId:16a2a1305a51f60d21ddbbb6bb09b78cb9369c87e00a2d48189b0a13c1dff725,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726858739639401417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c15024e982c2250e97fb1c5a8ab6c46acbc2be83df9e1385a32e31ee31ed6,PodSandboxId:64ba18194b8cee6a5dc945c8a860e30df4889b5327784d28ca84fa101d3ee3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858689630170928,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5187d6ee59db46aeb3871c648064f41c7129c72b4fb12215dcfa9ff690e3dacc,PodSandboxId:16a2a1305a51f60d21ddbbb6bb09b78cb9369c87e00a2d48189b0a13c1dff725,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726858688628099955,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d017a5b283a90948c66d42e738cadc0b0558d24b471a758eba6ba1ba1d8f7c38,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858686629742984,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667c79074c454aa20ce82977f878cfe4a37c6f5ea0695c815cbba15549f3a45f,PodSandboxId:0a6e91416ea52f22ed259013e4a0a21bf72498fd8c931f248c2115876ae94d63,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726858681003653008,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f33ac8941017429ef2f8b90f5da558d02aee1e4f28f943f00cbb9948c09384,PodSandboxId:63cc3aec72e5ac7247b8edf4ee58731ed8aeec1987054757f74a79629da16d78,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726858662970382907,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250bdcc9f914b29a36cef0bb52cd1ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c9c9f659f7c8eea8960ff4d57a4049347beaf567d58cc6dd20e62ae35179d4,PodSandboxId:f4deab987a6c34e26ea3f805609a830b7822a6602102d1f413f9ce3e6884ef4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647995000091,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fefbc436d3eff2c5abd1684c59842894a743e26f9eb3ca974dbbc048112157d3,PodSandboxId:146e6c4948059463392671cd62b7ec8b5a29dc21e9ab6ebe87a3f7c25839e916,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858647885688947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041c8157b3922d976470968a7d805abcf93da72e8542bc6a01ac49bce0a281c4,PodSandboxId:097a4985f63bc254a2bdce47ba0c22b1b5b624fcf1c6cf3c7eb8ea0012d8d427,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726858647587461354,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5c19fcb571e8a32048d366070b16f702a20dcc8c9e90f7efdbdbcf068bb31c1,PodSandboxId:947865a8625cf4fd5130e55bfe16430a11573647782a77e8f42d10abe79271e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858647717412599,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1977c4370e572f9d46a89fdac2f5bf124e2fe7a35d95b9c4d329384e57264a9,PodSandboxId:dddb1e001fdf1c2b4148f34e191168adccea81cff10b5b3e18b1e18c43e20229,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647711164608,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf18d395747b302270fce67fcca779ef04ced72182d681e473e9fc81edb7b5a,PodSandboxId:f3b7300b04471e8ff85210139d0aaf4a37484fda119ba2d5c88427f9d6e07acb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858647658691652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561
e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a18e65180dd208a362d51b7dedc97d749dc64b3373b275ba6e9776934ebeb40,PodSandboxId:64ba18194b8cee6a5dc945c8a860e30df4889b5327784d28ca84fa101d3ee3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726858647533318453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:231315ec7d013fd90fbc26e6a4f8fc59d177b677f3ec717caa70d251b84de829,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726858647426978213,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:344b03b51dddb76a43e34eff505cae1c7f2fc0c407fd4b1907c7b90ca3f1740d,PodSandboxId:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726858192106346190,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e,PodSandboxId:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056980796182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1,PodSandboxId:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056983757613,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98,PodSandboxId:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726858044669171219,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8,PodSandboxId:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726858044313148897,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706,PodSandboxId:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726858033124074067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93,PodSandboxId:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726858033076556541,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8fec38b7-c945-42cb-b1d2-1094f1816443 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:00 ha-525790 crio[3621]: time="2024-09-20 19:07:00.675312551Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dc61586b-4519-4915-95c5-83172fdc43d7 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:07:00 ha-525790 crio[3621]: time="2024-09-20 19:07:00.675390829Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc61586b-4519-4915-95c5-83172fdc43d7 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:07:00 ha-525790 crio[3621]: time="2024-09-20 19:07:00.676691606Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5eead9c5-3622-4b35-ba4c-1be243702659 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:07:00 ha-525790 crio[3621]: time="2024-09-20 19:07:00.677245531Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859220677221673,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5eead9c5-3622-4b35-ba4c-1be243702659 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:07:00 ha-525790 crio[3621]: time="2024-09-20 19:07:00.677744624Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=032739a6-8ce9-487b-a737-eb0395d168bb name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:00 ha-525790 crio[3621]: time="2024-09-20 19:07:00.677815432Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=032739a6-8ce9-487b-a737-eb0395d168bb name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:00 ha-525790 crio[3621]: time="2024-09-20 19:07:00.679402258Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60cc74ea619fb947f0b84edcc3d897bff8752b038a8f9b1725bd5384cedcaabd,PodSandboxId:16a2a1305a51f60d21ddbbb6bb09b78cb9369c87e00a2d48189b0a13c1dff725,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726858739639401417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c15024e982c2250e97fb1c5a8ab6c46acbc2be83df9e1385a32e31ee31ed6,PodSandboxId:64ba18194b8cee6a5dc945c8a860e30df4889b5327784d28ca84fa101d3ee3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858689630170928,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5187d6ee59db46aeb3871c648064f41c7129c72b4fb12215dcfa9ff690e3dacc,PodSandboxId:16a2a1305a51f60d21ddbbb6bb09b78cb9369c87e00a2d48189b0a13c1dff725,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726858688628099955,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d017a5b283a90948c66d42e738cadc0b0558d24b471a758eba6ba1ba1d8f7c38,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858686629742984,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667c79074c454aa20ce82977f878cfe4a37c6f5ea0695c815cbba15549f3a45f,PodSandboxId:0a6e91416ea52f22ed259013e4a0a21bf72498fd8c931f248c2115876ae94d63,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726858681003653008,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f33ac8941017429ef2f8b90f5da558d02aee1e4f28f943f00cbb9948c09384,PodSandboxId:63cc3aec72e5ac7247b8edf4ee58731ed8aeec1987054757f74a79629da16d78,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726858662970382907,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250bdcc9f914b29a36cef0bb52cd1ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c9c9f659f7c8eea8960ff4d57a4049347beaf567d58cc6dd20e62ae35179d4,PodSandboxId:f4deab987a6c34e26ea3f805609a830b7822a6602102d1f413f9ce3e6884ef4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647995000091,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fefbc436d3eff2c5abd1684c59842894a743e26f9eb3ca974dbbc048112157d3,PodSandboxId:146e6c4948059463392671cd62b7ec8b5a29dc21e9ab6ebe87a3f7c25839e916,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858647885688947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041c8157b3922d976470968a7d805abcf93da72e8542bc6a01ac49bce0a281c4,PodSandboxId:097a4985f63bc254a2bdce47ba0c22b1b5b624fcf1c6cf3c7eb8ea0012d8d427,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726858647587461354,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5c19fcb571e8a32048d366070b16f702a20dcc8c9e90f7efdbdbcf068bb31c1,PodSandboxId:947865a8625cf4fd5130e55bfe16430a11573647782a77e8f42d10abe79271e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858647717412599,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1977c4370e572f9d46a89fdac2f5bf124e2fe7a35d95b9c4d329384e57264a9,PodSandboxId:dddb1e001fdf1c2b4148f34e191168adccea81cff10b5b3e18b1e18c43e20229,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647711164608,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf18d395747b302270fce67fcca779ef04ced72182d681e473e9fc81edb7b5a,PodSandboxId:f3b7300b04471e8ff85210139d0aaf4a37484fda119ba2d5c88427f9d6e07acb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858647658691652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561
e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a18e65180dd208a362d51b7dedc97d749dc64b3373b275ba6e9776934ebeb40,PodSandboxId:64ba18194b8cee6a5dc945c8a860e30df4889b5327784d28ca84fa101d3ee3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726858647533318453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:231315ec7d013fd90fbc26e6a4f8fc59d177b677f3ec717caa70d251b84de829,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726858647426978213,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:344b03b51dddb76a43e34eff505cae1c7f2fc0c407fd4b1907c7b90ca3f1740d,PodSandboxId:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726858192106346190,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e,PodSandboxId:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056980796182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1,PodSandboxId:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056983757613,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98,PodSandboxId:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726858044669171219,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8,PodSandboxId:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726858044313148897,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706,PodSandboxId:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726858033124074067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93,PodSandboxId:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726858033076556541,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=032739a6-8ce9-487b-a737-eb0395d168bb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	60cc74ea619fb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Running             storage-provisioner       4                   16a2a1305a51f       storage-provisioner
	613c15024e982       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      8 minutes ago       Running             kube-apiserver            3                   64ba18194b8ce       kube-apiserver-ha-525790
	5187d6ee59db4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Exited              storage-provisioner       3                   16a2a1305a51f       storage-provisioner
	d017a5b283a90       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      8 minutes ago       Running             kube-controller-manager   2                   4014793ae3deb       kube-controller-manager-ha-525790
	667c79074c454       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      8 minutes ago       Running             busybox                   1                   0a6e91416ea52       busybox-7dff88458-z26jr
	22f33ac894101       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      9 minutes ago       Running             kube-vip                  0                   63cc3aec72e5a       kube-vip-ha-525790
	a2c9c9f659f7c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      9 minutes ago       Running             coredns                   1                   f4deab987a6c3       coredns-7c65d6cfc9-nfnkj
	fefbc436d3eff       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      9 minutes ago       Running             kube-proxy                1                   146e6c4948059       kube-proxy-958jz
	c5c19fcb571e8       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      9 minutes ago       Running             etcd                      1                   947865a8625cf       etcd-ha-525790
	a1977c4370e57       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      9 minutes ago       Running             coredns                   1                   dddb1e001fdf1       coredns-7c65d6cfc9-rpcds
	6cf18d395747b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      9 minutes ago       Running             kube-scheduler            1                   f3b7300b04471       kube-scheduler-ha-525790
	041c8157b3922       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      9 minutes ago       Running             kindnet-cni               1                   097a4985f63bc       kindnet-9qbm6
	8a18e65180dd2       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      9 minutes ago       Exited              kube-apiserver            2                   64ba18194b8ce       kube-apiserver-ha-525790
	231315ec7d013       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      9 minutes ago       Exited              kube-controller-manager   1                   4014793ae3deb       kube-controller-manager-ha-525790
	344b03b51dddb       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   17 minutes ago      Exited              busybox                   0                   125671e39b996       busybox-7dff88458-z26jr
	172e8f75d2a84       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      19 minutes ago      Exited              coredns                   0                   5dbd6acffd5c5       coredns-7c65d6cfc9-nfnkj
	3dff404b6ad2a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      19 minutes ago      Exited              coredns                   0                   34517f9f64c86       coredns-7c65d6cfc9-rpcds
	5579930bef0fc       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      19 minutes ago      Exited              kindnet-cni               0                   64136f65f6d34       kindnet-9qbm6
	3d469134674c2       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      19 minutes ago      Exited              kube-proxy                0                   2e440a5ac73b7       kube-proxy-958jz
	7d0496391eb85       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      19 minutes ago      Exited              kube-scheduler            0                   fae09dfcf3d6f       kube-scheduler-ha-525790
	bcca29b119984       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      19 minutes ago      Exited              etcd                      0                   17818940c2036       etcd-ha-525790
	
	
	==> coredns [172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1] <==
	[INFO] 10.244.1.2:49534 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000164113s
	[INFO] 10.244.2.2:50032 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167479s
	[INFO] 10.244.2.2:33413 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001865571s
	[INFO] 10.244.0.4:38374 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010475s
	[INFO] 10.244.0.4:44676 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170058s
	[INFO] 10.244.0.4:54182 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000123082s
	[INFO] 10.244.0.4:52067 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108075s
	[INFO] 10.244.1.2:36885 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000133944s
	[INFO] 10.244.2.2:48327 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127372s
	[INFO] 10.244.2.2:52262 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160755s
	[INFO] 10.244.0.4:44171 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111758s
	[INFO] 10.244.1.2:36220 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000196033s
	[INFO] 10.244.1.2:33859 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000222322s
	[INFO] 10.244.1.2:55349 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000158431s
	[INFO] 10.244.2.2:37976 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138385s
	[INFO] 10.244.2.2:56378 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000191303s
	[INFO] 10.244.2.2:54246 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117607s
	[INFO] 10.244.0.4:53115 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116565s
	[INFO] 10.244.0.4:49608 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000095821s
	[INFO] 10.244.0.4:60862 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111997s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e] <==
	[INFO] 10.244.2.2:42750 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000311517s
	[INFO] 10.244.2.2:42748 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001319529s
	[INFO] 10.244.2.2:49203 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000190348s
	[INFO] 10.244.2.2:44849 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00019366s
	[INFO] 10.244.2.2:52186 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103082s
	[INFO] 10.244.0.4:58300 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140735s
	[INFO] 10.244.0.4:59752 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001702673s
	[INFO] 10.244.0.4:33721 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001170599s
	[INFO] 10.244.0.4:42180 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061647s
	[INFO] 10.244.1.2:49177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000333372s
	[INFO] 10.244.1.2:57192 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147894s
	[INFO] 10.244.1.2:59125 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095482s
	[INFO] 10.244.2.2:50879 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019818s
	[INFO] 10.244.2.2:47467 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096359s
	[INFO] 10.244.0.4:54464 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087148s
	[INFO] 10.244.0.4:40326 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011895s
	[INFO] 10.244.0.4:46142 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071583s
	[INFO] 10.244.1.2:50168 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000224622s
	[INFO] 10.244.2.2:50611 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000117577s
	[INFO] 10.244.0.4:57391 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000320119s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1717&timeout=7m26s&timeoutSeconds=446&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1736&timeout=7m56s&timeoutSeconds=476&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1713&timeout=6m15s&timeoutSeconds=375&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a1977c4370e572f9d46a89fdac2f5bf124e2fe7a35d95b9c4d329384e57264a9] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[211372876]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (20-Sep-2024 18:57:36.158) (total time: 10001ms):
	Trace[211372876]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:57:46.159)
	Trace[211372876]: [10.001611713s] [10.001611713s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:33172->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:33172->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [a2c9c9f659f7c8eea8960ff4d57a4049347beaf567d58cc6dd20e62ae35179d4] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[159325140]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (20-Sep-2024 18:57:32.397) (total time: 10001ms):
	Trace[159325140]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:57:42.399)
	Trace[159325140]: [10.001577176s] [10.001577176s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:56710->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:56710->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:56726->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:56726->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-525790
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-525790
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=ha-525790
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_47_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:47:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-525790
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:07:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 19:03:15 +0000   Fri, 20 Sep 2024 18:47:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 19:03:15 +0000   Fri, 20 Sep 2024 18:47:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 19:03:15 +0000   Fri, 20 Sep 2024 18:47:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 19:03:15 +0000   Fri, 20 Sep 2024 18:47:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.149
	  Hostname:    ha-525790
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d3f2b96a8819496a94e034cf4adf7a85
	  System UUID:                d3f2b96a-8819-496a-94e0-34cf4adf7a85
	  Boot ID:                    02f79ecd-567f-4683-83ce-59afb46feab6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-z26jr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-7c65d6cfc9-nfnkj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                 coredns-7c65d6cfc9-rpcds             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                 etcd-ha-525790                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-9qbm6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-525790             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-525790    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-958jz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-525790             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-525790                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 19m                  kube-proxy       
	  Normal   Starting                 8m51s                kube-proxy       
	  Normal   NodeHasSufficientPID     19m                  kubelet          Node ha-525790 status is now: NodeHasSufficientPID
	  Normal   Starting                 19m                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  19m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  19m                  kubelet          Node ha-525790 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m                  kubelet          Node ha-525790 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           19m                  node-controller  Node ha-525790 event: Registered Node ha-525790 in Controller
	  Normal   NodeReady                19m                  kubelet          Node ha-525790 status is now: NodeReady
	  Normal   RegisteredNode           18m                  node-controller  Node ha-525790 event: Registered Node ha-525790 in Controller
	  Normal   RegisteredNode           17m                  node-controller  Node ha-525790 event: Registered Node ha-525790 in Controller
	  Normal   NodeNotReady             9m54s (x3 over 10m)  kubelet          Node ha-525790 status is now: NodeNotReady
	  Warning  ContainerGCFailed        9m42s (x2 over 10m)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           8m56s                node-controller  Node ha-525790 event: Registered Node ha-525790 in Controller
	  Normal   RegisteredNode           8m47s                node-controller  Node ha-525790 event: Registered Node ha-525790 in Controller
	
	
	Name:               ha-525790-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-525790-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=ha-525790
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T18_48_16_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:48:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-525790-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:06:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 19:04:00 +0000   Fri, 20 Sep 2024 18:58:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 19:04:00 +0000   Fri, 20 Sep 2024 18:58:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 19:04:00 +0000   Fri, 20 Sep 2024 18:58:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 19:04:00 +0000   Fri, 20 Sep 2024 18:58:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.246
	  Hostname:    ha-525790-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1dbde4511fc24bbcb1281f7b7d6ff24f
	  System UUID:                1dbde451-1fc2-4bbc-b128-1f7b7d6ff24f
	  Boot ID:                    d5658712-0cd7-4a8d-96e7-dd80ca41efeb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7jtss                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-525790-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kindnet-8glgp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-apiserver-ha-525790-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-525790-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-sspfs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-525790-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-525790-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 18m                    kube-proxy       
	  Normal  Starting                 8m29s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)      kubelet          Node ha-525790-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)      kubelet          Node ha-525790-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)      kubelet          Node ha-525790-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                    node-controller  Node ha-525790-m02 event: Registered Node ha-525790-m02 in Controller
	  Normal  RegisteredNode           18m                    node-controller  Node ha-525790-m02 event: Registered Node ha-525790-m02 in Controller
	  Normal  RegisteredNode           17m                    node-controller  Node ha-525790-m02 event: Registered Node ha-525790-m02 in Controller
	  Normal  NodeNotReady             15m                    node-controller  Node ha-525790-m02 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  9m17s (x8 over 9m17s)  kubelet          Node ha-525790-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 9m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    9m17s (x8 over 9m17s)  kubelet          Node ha-525790-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s (x7 over 9m17s)  kubelet          Node ha-525790-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m56s                  node-controller  Node ha-525790-m02 event: Registered Node ha-525790-m02 in Controller
	  Normal  RegisteredNode           8m47s                  node-controller  Node ha-525790-m02 event: Registered Node ha-525790-m02 in Controller
	
	
	Name:               ha-525790-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-525790-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=ha-525790
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T18_49_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:49:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-525790-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:06:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 19:04:36 +0000   Fri, 20 Sep 2024 18:58:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 19:04:36 +0000   Fri, 20 Sep 2024 18:58:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 19:04:36 +0000   Fri, 20 Sep 2024 18:58:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 19:04:36 +0000   Fri, 20 Sep 2024 18:58:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.105
	  Hostname:    ha-525790-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 007556c5fa674bcd927152e3b0cca9b2
	  System UUID:                007556c5-fa67-4bcd-9271-52e3b0cca9b2
	  Boot ID:                    bc1cee4e-e43a-4f8e-a46b-33fd1ad9466b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-jmx4g                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-525790-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-j5mmq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-ha-525790-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-525790-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-dx9pg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-525790-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-525790-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 17m                  kube-proxy       
	  Normal   Starting                 7m54s                kube-proxy       
	  Normal   NodeAllocatableEnforced  17m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)    kubelet          Node ha-525790-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)    kubelet          Node ha-525790-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m (x7 over 17m)    kubelet          Node ha-525790-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m                  node-controller  Node ha-525790-m03 event: Registered Node ha-525790-m03 in Controller
	  Normal   RegisteredNode           17m                  node-controller  Node ha-525790-m03 event: Registered Node ha-525790-m03 in Controller
	  Normal   RegisteredNode           17m                  node-controller  Node ha-525790-m03 event: Registered Node ha-525790-m03 in Controller
	  Normal   RegisteredNode           8m56s                node-controller  Node ha-525790-m03 event: Registered Node ha-525790-m03 in Controller
	  Normal   RegisteredNode           8m47s                node-controller  Node ha-525790-m03 event: Registered Node ha-525790-m03 in Controller
	  Normal   NodeNotReady             8m16s                node-controller  Node ha-525790-m03 status is now: NodeNotReady
	  Normal   Starting                 8m2s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8m2s (x2 over 8m2s)  kubelet          Node ha-525790-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m2s (x2 over 8m2s)  kubelet          Node ha-525790-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m2s (x2 over 8m2s)  kubelet          Node ha-525790-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8m2s                 kubelet          Node ha-525790-m03 has been rebooted, boot id: bc1cee4e-e43a-4f8e-a46b-33fd1ad9466b
	  Normal   NodeReady                8m2s                 kubelet          Node ha-525790-m03 status is now: NodeReady
	
	
	Name:               ha-525790-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-525790-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=ha-525790
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T18_50_26_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:50:26 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-525790-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:53:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 20 Sep 2024 18:50:57 +0000   Fri, 20 Sep 2024 18:58:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 20 Sep 2024 18:50:57 +0000   Fri, 20 Sep 2024 18:58:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 20 Sep 2024 18:50:57 +0000   Fri, 20 Sep 2024 18:58:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 20 Sep 2024 18:50:57 +0000   Fri, 20 Sep 2024 18:58:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.181
	  Hostname:    ha-525790-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c58d814e5e5d49b699d9f977eb54ff58
	  System UUID:                c58d814e-5e5d-49b6-99d9-f977eb54ff58
	  Boot ID:                    69924ac5-b6f2-4ddd-bd0d-fa3c683681d6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-df8hf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-proxy-w98cx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  NodeHasSufficientMemory  16m (x2 over 16m)  kubelet          Node ha-525790-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x2 over 16m)  kubelet          Node ha-525790-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x2 over 16m)  kubelet          Node ha-525790-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16m                node-controller  Node ha-525790-m04 event: Registered Node ha-525790-m04 in Controller
	  Normal  RegisteredNode           16m                node-controller  Node ha-525790-m04 event: Registered Node ha-525790-m04 in Controller
	  Normal  RegisteredNode           16m                node-controller  Node ha-525790-m04 event: Registered Node ha-525790-m04 in Controller
	  Normal  NodeReady                16m                kubelet          Node ha-525790-m04 status is now: NodeReady
	  Normal  RegisteredNode           8m56s              node-controller  Node ha-525790-m04 event: Registered Node ha-525790-m04 in Controller
	  Normal  RegisteredNode           8m47s              node-controller  Node ha-525790-m04 event: Registered Node ha-525790-m04 in Controller
	  Normal  NodeNotReady             8m16s              node-controller  Node ha-525790-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep20 18:47] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.053987] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058272] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.180542] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.143015] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.280287] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +3.923962] systemd-fstab-generator[743]: Ignoring "noauto" option for root device
	[  +3.905808] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.064972] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.290695] systemd-fstab-generator[1298]: Ignoring "noauto" option for root device
	[  +0.091789] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.472602] kauditd_printk_skb: 36 callbacks suppressed
	[ +11.974718] kauditd_printk_skb: 23 callbacks suppressed
	[Sep20 18:48] kauditd_printk_skb: 24 callbacks suppressed
	[Sep20 18:57] systemd-fstab-generator[3539]: Ignoring "noauto" option for root device
	[  +0.144284] systemd-fstab-generator[3551]: Ignoring "noauto" option for root device
	[  +0.175904] systemd-fstab-generator[3567]: Ignoring "noauto" option for root device
	[  +0.158366] systemd-fstab-generator[3579]: Ignoring "noauto" option for root device
	[  +0.266903] systemd-fstab-generator[3607]: Ignoring "noauto" option for root device
	[  +1.960904] systemd-fstab-generator[3709]: Ignoring "noauto" option for root device
	[  +6.729271] kauditd_printk_skb: 122 callbacks suppressed
	[ +12.246162] kauditd_printk_skb: 97 callbacks suppressed
	[ +10.067162] kauditd_printk_skb: 1 callbacks suppressed
	[Sep20 18:58] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93] <==
	2024/09/20 18:55:46 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/20 18:55:46 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/20 18:55:46 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/20 18:55:46 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-20T18:55:46.260496Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.149:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T18:55:46.260580Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.149:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-20T18:55:46.260677Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"ba3e3e863cacc4d","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-20T18:55:46.260889Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c8e33f35ad636831"}
	{"level":"info","ts":"2024-09-20T18:55:46.260941Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c8e33f35ad636831"}
	{"level":"info","ts":"2024-09-20T18:55:46.260980Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c8e33f35ad636831"}
	{"level":"info","ts":"2024-09-20T18:55:46.261097Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831"}
	{"level":"info","ts":"2024-09-20T18:55:46.261203Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831"}
	{"level":"info","ts":"2024-09-20T18:55:46.261373Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831"}
	{"level":"info","ts":"2024-09-20T18:55:46.261477Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c8e33f35ad636831"}
	{"level":"info","ts":"2024-09-20T18:55:46.261501Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T18:55:46.261581Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T18:55:46.261695Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T18:55:46.261817Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ba3e3e863cacc4d","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T18:55:46.261878Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ba3e3e863cacc4d","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T18:55:46.261957Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ba3e3e863cacc4d","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T18:55:46.261987Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T18:55:46.264966Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.149:2380"}
	{"level":"warn","ts":"2024-09-20T18:55:46.265043Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.834260925s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-20T18:55:46.265109Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.149:2380"}
	{"level":"info","ts":"2024-09-20T18:55:46.265138Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-525790","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.149:2380"],"advertise-client-urls":["https://192.168.39.149:2379"]}
	
	
	==> etcd [c5c19fcb571e8a32048d366070b16f702a20dcc8c9e90f7efdbdbcf068bb31c1] <==
	{"level":"warn","ts":"2024-09-20T19:06:33.864316Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6c1c1087b613d98","rtt":"0s","error":"dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:35.444109Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.105:2380/version","remote-member-id":"6c1c1087b613d98","error":"Get \"https://192.168.39.105:2380/version\": dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:35.444168Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"6c1c1087b613d98","error":"Get \"https://192.168.39.105:2380/version\": dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:38.864545Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6c1c1087b613d98","rtt":"0s","error":"dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:38.864555Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6c1c1087b613d98","rtt":"0s","error":"dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:39.445638Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.105:2380/version","remote-member-id":"6c1c1087b613d98","error":"Get \"https://192.168.39.105:2380/version\": dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:39.445696Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"6c1c1087b613d98","error":"Get \"https://192.168.39.105:2380/version\": dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:43.447641Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.105:2380/version","remote-member-id":"6c1c1087b613d98","error":"Get \"https://192.168.39.105:2380/version\": dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:43.447692Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"6c1c1087b613d98","error":"Get \"https://192.168.39.105:2380/version\": dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:43.865635Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6c1c1087b613d98","rtt":"0s","error":"dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:43.865652Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6c1c1087b613d98","rtt":"0s","error":"dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:47.449808Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.105:2380/version","remote-member-id":"6c1c1087b613d98","error":"Get \"https://192.168.39.105:2380/version\": dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:47.449864Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"6c1c1087b613d98","error":"Get \"https://192.168.39.105:2380/version\": dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:48.866015Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6c1c1087b613d98","rtt":"0s","error":"dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:48.865957Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6c1c1087b613d98","rtt":"0s","error":"dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:51.451602Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.105:2380/version","remote-member-id":"6c1c1087b613d98","error":"Get \"https://192.168.39.105:2380/version\": dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:51.451671Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"6c1c1087b613d98","error":"Get \"https://192.168.39.105:2380/version\": dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:53.866413Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6c1c1087b613d98","rtt":"0s","error":"dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:53.866484Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6c1c1087b613d98","rtt":"0s","error":"dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:55.453403Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.105:2380/version","remote-member-id":"6c1c1087b613d98","error":"Get \"https://192.168.39.105:2380/version\": dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:55.453528Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"6c1c1087b613d98","error":"Get \"https://192.168.39.105:2380/version\": dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:58.867162Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6c1c1087b613d98","rtt":"0s","error":"dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:58.867211Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6c1c1087b613d98","rtt":"0s","error":"dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:59.455812Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.105:2380/version","remote-member-id":"6c1c1087b613d98","error":"Get \"https://192.168.39.105:2380/version\": dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:59.455942Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"6c1c1087b613d98","error":"Get \"https://192.168.39.105:2380/version\": dial tcp 192.168.39.105:2380: connect: connection refused"}
	
	
	==> kernel <==
	 19:07:01 up 20 min,  0 users,  load average: 0.55, 0.37, 0.32
	Linux ha-525790 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [041c8157b3922d976470968a7d805abcf93da72e8542bc6a01ac49bce0a281c4] <==
	I0920 19:06:28.907543       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 19:06:38.905471       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 19:06:38.905574       1 main.go:299] handling current node
	I0920 19:06:38.905606       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 19:06:38.905624       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 19:06:38.905765       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0920 19:06:38.905786       1 main.go:322] Node ha-525790-m03 has CIDR [10.244.2.0/24] 
	I0920 19:06:38.905836       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 19:06:38.905865       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	I0920 19:06:48.912129       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 19:06:48.912193       1 main.go:299] handling current node
	I0920 19:06:48.912215       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 19:06:48.912224       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 19:06:48.912465       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0920 19:06:48.912503       1 main.go:322] Node ha-525790-m03 has CIDR [10.244.2.0/24] 
	I0920 19:06:48.912579       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 19:06:48.912587       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	I0920 19:06:58.914181       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0920 19:06:58.914363       1 main.go:322] Node ha-525790-m03 has CIDR [10.244.2.0/24] 
	I0920 19:06:58.914504       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 19:06:58.914530       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	I0920 19:06:58.914610       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 19:06:58.914630       1 main.go:299] handling current node
	I0920 19:06:58.914657       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 19:06:58.914674       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98] <==
	I0920 18:55:25.880711       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:55:25.880821       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 18:55:25.880982       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0920 18:55:25.881006       1 main.go:322] Node ha-525790-m03 has CIDR [10.244.2.0/24] 
	I0920 18:55:25.881062       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 18:55:25.881173       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	I0920 18:55:25.881330       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 18:55:25.881366       1 main.go:299] handling current node
	I0920 18:55:35.880574       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 18:55:35.880753       1 main.go:299] handling current node
	I0920 18:55:35.880788       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:55:35.880809       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 18:55:35.880968       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0920 18:55:35.881037       1 main.go:322] Node ha-525790-m03 has CIDR [10.244.2.0/24] 
	I0920 18:55:35.881188       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 18:55:35.881225       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	E0920 18:55:44.415378       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes)
	I0920 18:55:45.880519       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 18:55:45.880573       1 main.go:299] handling current node
	I0920 18:55:45.880594       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:55:45.880614       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 18:55:45.880735       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0920 18:55:45.880740       1 main.go:322] Node ha-525790-m03 has CIDR [10.244.2.0/24] 
	I0920 18:55:45.880784       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 18:55:45.880788       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [613c15024e982c2250e97fb1c5a8ab6c46acbc2be83df9e1385a32e31ee31ed6] <==
	I0920 18:58:11.563790       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0920 18:58:11.566669       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0920 18:58:11.647086       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0920 18:58:11.649520       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 18:58:11.649641       1 policy_source.go:224] refreshing policies
	I0920 18:58:11.663884       1 shared_informer.go:320] Caches are synced for configmaps
	I0920 18:58:11.678760       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0920 18:58:11.678792       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0920 18:58:11.679064       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0920 18:58:11.679104       1 aggregator.go:171] initial CRD sync complete...
	I0920 18:58:11.679119       1 autoregister_controller.go:144] Starting autoregister controller
	I0920 18:58:11.679124       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0920 18:58:11.679129       1 cache.go:39] Caches are synced for autoregister controller
	I0920 18:58:11.711573       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	W0920 18:58:11.731002       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.246]
	I0920 18:58:11.733911       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 18:58:11.740737       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0920 18:58:11.743338       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0920 18:58:11.744337       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 18:58:11.750631       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0920 18:58:11.753464       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0920 18:58:11.759128       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0920 18:58:11.762522       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0920 18:58:12.550864       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0920 18:58:13.073998       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.149 192.168.39.246]
	
	
	==> kube-apiserver [8a18e65180dd208a362d51b7dedc97d749dc64b3373b275ba6e9776934ebeb40] <==
	I0920 18:57:28.372014       1 options.go:228] external host was not specified, using 192.168.39.149
	I0920 18:57:28.378488       1 server.go:142] Version: v1.31.1
	I0920 18:57:28.378636       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:57:28.839362       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0920 18:57:28.848920       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0920 18:57:28.849194       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0920 18:57:28.849607       1 instance.go:232] Using reconciler: lease
	I0920 18:57:28.850178       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0920 18:57:48.834604       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0920 18:57:48.834768       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0920 18:57:48.850633       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [231315ec7d013fd90fbc26e6a4f8fc59d177b677f3ec717caa70d251b84de829] <==
	I0920 18:57:29.220757       1 serving.go:386] Generated self-signed cert in-memory
	I0920 18:57:29.591669       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0920 18:57:29.591756       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:57:29.593495       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0920 18:57:29.593650       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0920 18:57:29.593717       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0920 18:57:29.593811       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0920 18:57:49.858234       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.149:8443/healthz\": dial tcp 192.168.39.149:8443: connect: connection refused"
	
	
	==> kube-controller-manager [d017a5b283a90948c66d42e738cadc0b0558d24b471a758eba6ba1ba1d8f7c38] <==
	I0920 18:58:33.309535       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="112.221µs"
	I0920 18:58:36.778429       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-pnvlm EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-pnvlm\": the object has been modified; please apply your changes to the latest version and try again"
	I0920 18:58:36.778846       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"f8801889-7675-43c1-a95d-31fccee966d2", APIVersion:"v1", ResourceVersion:"254", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-pnvlm EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-pnvlm": the object has been modified; please apply your changes to the latest version and try again
	I0920 18:58:36.817540       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="57.323955ms"
	I0920 18:58:36.817727       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="78.48µs"
	I0920 18:58:45.303183       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m03"
	I0920 18:58:45.306837       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:58:45.332813       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m03"
	I0920 18:58:45.351564       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:58:45.470358       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="45.652538ms"
	I0920 18:58:45.470446       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="42.684µs"
	I0920 18:58:49.788903       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:58:50.566798       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m03"
	I0920 18:58:53.132790       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m02"
	I0920 18:58:59.474045       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m03"
	I0920 18:58:59.492472       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m03"
	I0920 18:58:59.712336       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m03"
	I0920 18:59:00.361226       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="74.989µs"
	I0920 18:59:00.746940       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:59:11.733735       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.059122ms"
	I0920 18:59:11.735538       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="64.244µs"
	I0920 18:59:29.887985       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m03"
	I0920 19:03:15.447982       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790"
	I0920 19:04:00.014795       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m02"
	I0920 19:04:36.164186       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m03"
	
	
	==> kube-proxy [3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8] <==
	E0920 18:54:29.445959       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:54:32.517585       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:54:32.517837       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:54:32.517758       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:54:32.518080       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:54:38.533704       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:54:38.533801       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:54:44.681435       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:54:44.681569       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:54:47.750519       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:54:47.750703       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:54:50.823395       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:54:50.823514       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:55:03.112602       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:55:03.112697       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:55:06.182242       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:55:06.182543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:55:12.325811       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:55:12.325963       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:55:30.759875       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:55:30.760470       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:55:46.118426       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:55:46.118557       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:55:46.118676       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:55:46.118740       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [fefbc436d3eff2c5abd1684c59842894a743e26f9eb3ca974dbbc048112157d3] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:57:30.565761       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-525790\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 18:57:33.640050       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-525790\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 18:57:36.709752       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-525790\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 18:57:42.859625       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-525790\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 18:57:52.070869       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-525790\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0920 18:58:09.389517       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.149"]
	E0920 18:58:09.389799       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:58:09.436483       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:58:09.436606       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:58:09.436666       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:58:09.441038       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:58:09.441633       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:58:09.442010       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:58:09.446213       1 config.go:199] "Starting service config controller"
	I0920 18:58:09.446402       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:58:09.446500       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:58:09.446578       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:58:09.448018       1 config.go:328] "Starting node config controller"
	I0920 18:58:09.448140       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:58:09.548198       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:58:09.548215       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:58:09.548482       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6cf18d395747b302270fce67fcca779ef04ced72182d681e473e9fc81edb7b5a] <==
	W0920 18:58:04.819433       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.149:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 18:58:04.819512       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.149:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:58:05.565781       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.149:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 18:58:05.565894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.149:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:58:06.403484       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.149:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 18:58:06.403602       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.149:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:58:06.898106       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.149:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 18:58:06.898186       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.149:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:58:06.917971       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.149:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 18:58:06.918035       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.149:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:58:07.845992       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.149:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 18:58:07.846092       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.149:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:58:08.543222       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.149:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 18:58:08.543360       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.149:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:58:08.927724       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.149:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 18:58:08.927853       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.149:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:58:09.205871       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.149:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 18:58:09.205944       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.149:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:58:11.576509       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 18:58:11.576628       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:58:11.576905       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 18:58:11.577007       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:58:11.577232       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 18:58:11.577486       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0920 18:58:30.575744       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706] <==
	I0920 18:50:26.263699       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-w98cx" node="ha-525790-m04"
	E0920 18:50:26.297985       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-hwgsh\": pod kindnet-hwgsh is already assigned to node \"ha-525790-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-hwgsh" node="ha-525790-m04"
	E0920 18:50:26.298064       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9ff40332-cdad-4e9f-99ca-28d1271713a8(kube-system/kindnet-hwgsh) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-hwgsh"
	E0920 18:50:26.298079       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-hwgsh\": pod kindnet-hwgsh is already assigned to node \"ha-525790-m04\"" pod="kube-system/kindnet-hwgsh"
	I0920 18:50:26.298095       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-hwgsh" node="ha-525790-m04"
	E0920 18:50:26.298461       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-rh89s\": pod kube-proxy-rh89s is already assigned to node \"ha-525790-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-rh89s" node="ha-525790-m04"
	E0920 18:50:26.298512       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 340d5abf-2e79-4cc0-8f1f-130c1e176259(kube-system/kube-proxy-rh89s) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-rh89s"
	E0920 18:50:26.298529       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-rh89s\": pod kube-proxy-rh89s is already assigned to node \"ha-525790-m04\"" pod="kube-system/kube-proxy-rh89s"
	I0920 18:50:26.298548       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-rh89s" node="ha-525790-m04"
	E0920 18:55:33.838133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0920 18:55:34.010012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0920 18:55:34.163933       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0920 18:55:35.197228       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0920 18:55:38.126361       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0920 18:55:38.323639       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0920 18:55:39.518704       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0920 18:55:39.859524       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0920 18:55:40.061646       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0920 18:55:40.131765       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0920 18:55:41.452147       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0920 18:55:43.449377       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0920 18:55:44.076439       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0920 18:55:44.277830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0920 18:55:45.932626       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0920 18:55:46.167365       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 20 19:05:29 ha-525790 kubelet[1305]: E0920 19:05:29.994863    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859129994500825,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:05:29 ha-525790 kubelet[1305]: E0920 19:05:29.994903    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859129994500825,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:05:39 ha-525790 kubelet[1305]: E0920 19:05:39.997200    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859139996916490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:05:39 ha-525790 kubelet[1305]: E0920 19:05:39.997304    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859139996916490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:05:49 ha-525790 kubelet[1305]: E0920 19:05:49.999055    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859149998696986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:05:49 ha-525790 kubelet[1305]: E0920 19:05:49.999377    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859149998696986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:00 ha-525790 kubelet[1305]: E0920 19:06:00.001953    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859160001440634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:00 ha-525790 kubelet[1305]: E0920 19:06:00.001978    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859160001440634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:10 ha-525790 kubelet[1305]: E0920 19:06:10.003368    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859170002995065,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:10 ha-525790 kubelet[1305]: E0920 19:06:10.003451    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859170002995065,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:19 ha-525790 kubelet[1305]: E0920 19:06:19.637558    1305 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 19:06:19 ha-525790 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 19:06:19 ha-525790 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 19:06:19 ha-525790 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 19:06:19 ha-525790 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 19:06:20 ha-525790 kubelet[1305]: E0920 19:06:20.006100    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859180005668944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:20 ha-525790 kubelet[1305]: E0920 19:06:20.006127    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859180005668944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:30 ha-525790 kubelet[1305]: E0920 19:06:30.007559    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859190007239612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:30 ha-525790 kubelet[1305]: E0920 19:06:30.007607    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859190007239612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:40 ha-525790 kubelet[1305]: E0920 19:06:40.010351    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859200009651480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:40 ha-525790 kubelet[1305]: E0920 19:06:40.010404    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859200009651480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:50 ha-525790 kubelet[1305]: E0920 19:06:50.012654    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859210012010968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:50 ha-525790 kubelet[1305]: E0920 19:06:50.013017    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859210012010968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:07:00 ha-525790 kubelet[1305]: E0920 19:07:00.015090    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859220014798886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:07:00 ha-525790 kubelet[1305]: E0920 19:07:00.015116    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859220014798886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 19:07:00.258325  771253 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19678-739831/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-525790 -n ha-525790
helpers_test.go:261: (dbg) Run:  kubectl --context ha-525790 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: etcd-ha-525790-m03 kube-controller-manager-ha-525790-m03 kube-vip-ha-525790-m03
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-525790 describe pod etcd-ha-525790-m03 kube-controller-manager-ha-525790-m03 kube-vip-ha-525790-m03
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context ha-525790 describe pod etcd-ha-525790-m03 kube-controller-manager-ha-525790-m03 kube-vip-ha-525790-m03: exit status 1 (65.850027ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "etcd-ha-525790-m03" not found
	Error from server (NotFound): pods "kube-controller-manager-ha-525790-m03" not found
	Error from server (NotFound): pods "kube-vip-ha-525790-m03" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context ha-525790 describe pod etcd-ha-525790-m03 kube-controller-manager-ha-525790-m03 kube-vip-ha-525790-m03: exit status 1
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (799.04s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (8.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-525790 node delete m03 -v=7 --alsologtostderr: (5.496761391s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-525790 status -v=7 --alsologtostderr: exit status 7 (479.291068ms)

                                                
                                                
-- stdout --
	ha-525790
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-525790-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-525790-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 19:07:07.895901  771512 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:07:07.896021  771512 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:07:07.896030  771512 out.go:358] Setting ErrFile to fd 2...
	I0920 19:07:07.896034  771512 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:07:07.896208  771512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
	I0920 19:07:07.896372  771512 out.go:352] Setting JSON to false
	I0920 19:07:07.896403  771512 mustload.go:65] Loading cluster: ha-525790
	I0920 19:07:07.896456  771512 notify.go:220] Checking for updates...
	I0920 19:07:07.896769  771512 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:07:07.896786  771512 status.go:174] checking status of ha-525790 ...
	I0920 19:07:07.897176  771512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:07:07.897216  771512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:07:07.912643  771512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45501
	I0920 19:07:07.913081  771512 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:07:07.913752  771512 main.go:141] libmachine: Using API Version  1
	I0920 19:07:07.913788  771512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:07:07.914113  771512 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:07:07.914257  771512 main.go:141] libmachine: (ha-525790) Calling .GetState
	I0920 19:07:07.916598  771512 status.go:364] ha-525790 host status = "Running" (err=<nil>)
	I0920 19:07:07.916615  771512 host.go:66] Checking if "ha-525790" exists ...
	I0920 19:07:07.916902  771512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:07:07.916939  771512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:07:07.932089  771512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39815
	I0920 19:07:07.932519  771512 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:07:07.933021  771512 main.go:141] libmachine: Using API Version  1
	I0920 19:07:07.933041  771512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:07:07.933316  771512 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:07:07.933516  771512 main.go:141] libmachine: (ha-525790) Calling .GetIP
	I0920 19:07:07.936581  771512 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 19:07:07.937040  771512 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 19:07:07.937067  771512 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 19:07:07.937192  771512 host.go:66] Checking if "ha-525790" exists ...
	I0920 19:07:07.937484  771512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:07:07.937522  771512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:07:07.952733  771512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34629
	I0920 19:07:07.953232  771512 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:07:07.953665  771512 main.go:141] libmachine: Using API Version  1
	I0920 19:07:07.953688  771512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:07:07.954005  771512 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:07:07.954177  771512 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 19:07:07.954360  771512 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 19:07:07.954385  771512 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 19:07:07.957002  771512 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 19:07:07.957439  771512 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 19:07:07.957464  771512 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 19:07:07.957691  771512 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 19:07:07.957887  771512 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 19:07:07.958043  771512 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 19:07:07.958167  771512 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 19:07:08.039109  771512 ssh_runner.go:195] Run: systemctl --version
	I0920 19:07:08.046267  771512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:07:08.062843  771512 kubeconfig.go:125] found "ha-525790" server: "https://192.168.39.254:8443"
	I0920 19:07:08.062911  771512 api_server.go:166] Checking apiserver status ...
	I0920 19:07:08.062948  771512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:08.077819  771512 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5100/cgroup
	W0920 19:07:08.088814  771512 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5100/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:07:08.088873  771512 ssh_runner.go:195] Run: ls
	I0920 19:07:08.093366  771512 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0920 19:07:08.098102  771512 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0920 19:07:08.098124  771512 status.go:456] ha-525790 apiserver status = Running (err=<nil>)
	I0920 19:07:08.098137  771512 status.go:176] ha-525790 status: &{Name:ha-525790 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 19:07:08.098167  771512 status.go:174] checking status of ha-525790-m02 ...
	I0920 19:07:08.098455  771512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:07:08.098496  771512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:07:08.115089  771512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36941
	I0920 19:07:08.115641  771512 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:07:08.116146  771512 main.go:141] libmachine: Using API Version  1
	I0920 19:07:08.116170  771512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:07:08.116488  771512 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:07:08.116730  771512 main.go:141] libmachine: (ha-525790-m02) Calling .GetState
	I0920 19:07:08.118335  771512 status.go:364] ha-525790-m02 host status = "Running" (err=<nil>)
	I0920 19:07:08.118355  771512 host.go:66] Checking if "ha-525790-m02" exists ...
	I0920 19:07:08.118658  771512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:07:08.118695  771512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:07:08.133548  771512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45679
	I0920 19:07:08.134047  771512 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:07:08.134563  771512 main.go:141] libmachine: Using API Version  1
	I0920 19:07:08.134597  771512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:07:08.134932  771512 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:07:08.135113  771512 main.go:141] libmachine: (ha-525790-m02) Calling .GetIP
	I0920 19:07:08.138213  771512 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 19:07:08.138729  771512 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:57:32 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 19:07:08.138749  771512 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 19:07:08.138970  771512 host.go:66] Checking if "ha-525790-m02" exists ...
	I0920 19:07:08.139272  771512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:07:08.139324  771512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:07:08.157881  771512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36837
	I0920 19:07:08.158376  771512 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:07:08.158947  771512 main.go:141] libmachine: Using API Version  1
	I0920 19:07:08.158972  771512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:07:08.159286  771512 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:07:08.159468  771512 main.go:141] libmachine: (ha-525790-m02) Calling .DriverName
	I0920 19:07:08.159639  771512 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 19:07:08.159662  771512 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 19:07:08.162425  771512 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 19:07:08.162934  771512 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:57:32 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 19:07:08.162959  771512 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 19:07:08.163065  771512 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 19:07:08.163233  771512 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 19:07:08.163373  771512 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 19:07:08.163505  771512 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/id_rsa Username:docker}
	I0920 19:07:08.250516  771512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:07:08.266926  771512 kubeconfig.go:125] found "ha-525790" server: "https://192.168.39.254:8443"
	I0920 19:07:08.266961  771512 api_server.go:166] Checking apiserver status ...
	I0920 19:07:08.267019  771512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:07:08.287214  771512 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1457/cgroup
	W0920 19:07:08.297420  771512 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1457/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:07:08.297472  771512 ssh_runner.go:195] Run: ls
	I0920 19:07:08.305482  771512 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0920 19:07:08.312121  771512 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0920 19:07:08.312148  771512 status.go:456] ha-525790-m02 apiserver status = Running (err=<nil>)
	I0920 19:07:08.312157  771512 status.go:176] ha-525790-m02 status: &{Name:ha-525790-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 19:07:08.312178  771512 status.go:174] checking status of ha-525790-m04 ...
	I0920 19:07:08.312520  771512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:07:08.312560  771512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:07:08.327922  771512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32995
	I0920 19:07:08.328399  771512 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:07:08.328882  771512 main.go:141] libmachine: Using API Version  1
	I0920 19:07:08.328912  771512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:07:08.329245  771512 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:07:08.329436  771512 main.go:141] libmachine: (ha-525790-m04) Calling .GetState
	I0920 19:07:08.330926  771512 status.go:364] ha-525790-m04 host status = "Stopped" (err=<nil>)
	I0920 19:07:08.330942  771512 status.go:377] host is not running, skipping remaining checks
	I0920 19:07:08.330949  771512 status.go:176] ha-525790-m04 status: &{Name:ha-525790-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-525790 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-525790 -n ha-525790
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-525790 logs -n 25: (1.635093798s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790-m02 sudo cat                                          | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m03_ha-525790-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m03:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04:/home/docker/cp-test_ha-525790-m03_ha-525790-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790-m04 sudo cat                                          | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m03_ha-525790-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-525790 cp testdata/cp-test.txt                                                | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3362703692/001/cp-test_ha-525790-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790:/home/docker/cp-test_ha-525790-m04_ha-525790.txt                       |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790 sudo cat                                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m04_ha-525790.txt                                 |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m02:/home/docker/cp-test_ha-525790-m04_ha-525790-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790-m02 sudo cat                                          | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m04_ha-525790-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m03:/home/docker/cp-test_ha-525790-m04_ha-525790-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790-m03 sudo cat                                          | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m04_ha-525790-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-525790 node stop m02 -v=7                                                     | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-525790 node start m02 -v=7                                                    | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-525790 -v=7                                                           | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-525790 -v=7                                                                | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-525790 --wait=true -v=7                                                    | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-525790                                                                | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 19:06 UTC |                     |
	| node    | ha-525790 node delete m03 -v=7                                                   | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 19:07 UTC | 20 Sep 24 19:07 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:55:45
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:55:45.275296  768595 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:55:45.275412  768595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:55:45.275421  768595 out.go:358] Setting ErrFile to fd 2...
	I0920 18:55:45.275425  768595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:55:45.275635  768595 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
	I0920 18:55:45.276210  768595 out.go:352] Setting JSON to false
	I0920 18:55:45.277141  768595 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9495,"bootTime":1726849050,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:55:45.277240  768595 start.go:139] virtualization: kvm guest
	I0920 18:55:45.279445  768595 out.go:177] * [ha-525790] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:55:45.280764  768595 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 18:55:45.280835  768595 notify.go:220] Checking for updates...
	I0920 18:55:45.283366  768595 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:55:45.284696  768595 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:55:45.285940  768595 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:55:45.287169  768595 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:55:45.288409  768595 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:55:45.290193  768595 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:55:45.290315  768595 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:55:45.290797  768595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:55:45.290891  768595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:55:45.306404  768595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37931
	I0920 18:55:45.306820  768595 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:55:45.307492  768595 main.go:141] libmachine: Using API Version  1
	I0920 18:55:45.307521  768595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:55:45.307939  768595 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:55:45.308132  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:55:45.343272  768595 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 18:55:45.344502  768595 start.go:297] selected driver: kvm2
	I0920 18:55:45.344515  768595 start.go:901] validating driver "kvm2" against &{Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false e
fk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:55:45.344647  768595 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:55:45.344970  768595 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:55:45.345050  768595 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19678-739831/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:55:45.360027  768595 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:55:45.360707  768595 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:55:45.360736  768595 cni.go:84] Creating CNI manager for ""
	I0920 18:55:45.360793  768595 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0920 18:55:45.360859  768595 start.go:340] cluster config:
	{Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel
:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:55:45.361009  768595 iso.go:125] acquiring lock: {Name:mk7c8e0c52ea50ffb7ac28fb9347d4f667085c62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:55:45.363552  768595 out.go:177] * Starting "ha-525790" primary control-plane node in "ha-525790" cluster
	I0920 18:55:45.364920  768595 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:55:45.364979  768595 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:55:45.364990  768595 cache.go:56] Caching tarball of preloaded images
	I0920 18:55:45.365061  768595 preload.go:172] Found /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:55:45.365070  768595 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:55:45.365198  768595 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 18:55:45.365394  768595 start.go:360] acquireMachinesLock for ha-525790: {Name:mke27b943eaf3105a3a7818ba8cbb5bd07aa92e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:55:45.365441  768595 start.go:364] duration metric: took 28.871µs to acquireMachinesLock for "ha-525790"
	I0920 18:55:45.365453  768595 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:55:45.365460  768595 fix.go:54] fixHost starting: 
	I0920 18:55:45.365716  768595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:55:45.365748  768595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:55:45.379754  768595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37639
	I0920 18:55:45.380277  768595 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:55:45.380763  768595 main.go:141] libmachine: Using API Version  1
	I0920 18:55:45.380778  768595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:55:45.381096  768595 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:55:45.381300  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:55:45.381472  768595 main.go:141] libmachine: (ha-525790) Calling .GetState
	I0920 18:55:45.382944  768595 fix.go:112] recreateIfNeeded on ha-525790: state=Running err=<nil>
	W0920 18:55:45.382979  768595 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:55:45.384708  768595 out.go:177] * Updating the running kvm2 "ha-525790" VM ...
	I0920 18:55:45.385966  768595 machine.go:93] provisionDockerMachine start ...
	I0920 18:55:45.385981  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:55:45.386173  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:55:45.388503  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.388933  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:45.388960  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.389104  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:55:45.389273  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.389402  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.389518  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:55:45.389711  768595 main.go:141] libmachine: Using SSH client type: native
	I0920 18:55:45.389908  768595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:55:45.389919  768595 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:55:45.492072  768595 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-525790
	
	I0920 18:55:45.492099  768595 main.go:141] libmachine: (ha-525790) Calling .GetMachineName
	I0920 18:55:45.492366  768595 buildroot.go:166] provisioning hostname "ha-525790"
	I0920 18:55:45.492393  768595 main.go:141] libmachine: (ha-525790) Calling .GetMachineName
	I0920 18:55:45.492559  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:55:45.495258  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.495689  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:45.495715  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.495923  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:55:45.496094  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.496279  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.496427  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:55:45.496584  768595 main.go:141] libmachine: Using SSH client type: native
	I0920 18:55:45.496775  768595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:55:45.496788  768595 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-525790 && echo "ha-525790" | sudo tee /etc/hostname
	I0920 18:55:45.611170  768595 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-525790
	
	I0920 18:55:45.611203  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:55:45.613965  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.614392  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:45.614418  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.614605  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:55:45.614780  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.614979  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.615163  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:55:45.615334  768595 main.go:141] libmachine: Using SSH client type: native
	I0920 18:55:45.615507  768595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:55:45.615522  768595 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-525790' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-525790/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-525790' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:55:45.716203  768595 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:55:45.716236  768595 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19678-739831/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-739831/.minikube}
	I0920 18:55:45.716258  768595 buildroot.go:174] setting up certificates
	I0920 18:55:45.716266  768595 provision.go:84] configureAuth start
	I0920 18:55:45.716287  768595 main.go:141] libmachine: (ha-525790) Calling .GetMachineName
	I0920 18:55:45.716546  768595 main.go:141] libmachine: (ha-525790) Calling .GetIP
	I0920 18:55:45.719410  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.719789  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:45.719816  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.720053  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:55:45.722137  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.722463  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:45.722483  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.722613  768595 provision.go:143] copyHostCerts
	I0920 18:55:45.722648  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 18:55:45.722687  768595 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem, removing ...
	I0920 18:55:45.722704  768595 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 18:55:45.722767  768595 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem (1078 bytes)
	I0920 18:55:45.722893  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 18:55:45.722922  768595 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem, removing ...
	I0920 18:55:45.722929  768595 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 18:55:45.722959  768595 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem (1123 bytes)
	I0920 18:55:45.723019  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 18:55:45.723040  768595 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem, removing ...
	I0920 18:55:45.723046  768595 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 18:55:45.723071  768595 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem (1679 bytes)
	I0920 18:55:45.723132  768595 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem org=jenkins.ha-525790 san=[127.0.0.1 192.168.39.149 ha-525790 localhost minikube]
	I0920 18:55:45.874751  768595 provision.go:177] copyRemoteCerts
	I0920 18:55:45.874835  768595 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:55:45.874884  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:55:45.877528  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.877971  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:45.878002  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.878210  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:55:45.878387  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.878591  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:55:45.878724  768595 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:55:45.960427  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 18:55:45.960518  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0920 18:55:45.994757  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 18:55:45.994865  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 18:55:46.024642  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 18:55:46.024718  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:55:46.055496  768595 provision.go:87] duration metric: took 339.216483ms to configureAuth
	I0920 18:55:46.055535  768595 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:55:46.055829  768595 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:55:46.055929  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:55:46.058831  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:46.059288  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:46.059324  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:46.059533  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:55:46.059716  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:46.059891  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:46.060010  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:55:46.060167  768595 main.go:141] libmachine: Using SSH client type: native
	I0920 18:55:46.060375  768595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:55:46.060391  768595 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:57:16.901155  768595 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:57:16.901195  768595 machine.go:96] duration metric: took 1m31.515216231s to provisionDockerMachine
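Between 18:55:46 and 18:57:16 the provisioner pushes the CRIO_MINIKUBE_OPTIONS drop-in over SSH and restarts cri-o, which accounts for most of the 1m31s reported above. As a rough sketch only (this is not minikube's ssh_runner; the address, user, key path, and remote command are copied from the log, and golang.org/x/crypto/ssh is assumed to be available), running that remote command could look like:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user, and address taken from the log above; adjust for a local test.
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
	}
	client, err := ssh.Dial("tcp", "192.168.39.149:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	// Same shell pipeline the log shows being sent over SSH.
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	out, err := sess.CombinedOutput(cmd)
	fmt.Printf("err=%v output=%s\n", err, out)
}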
	I0920 18:57:16.901213  768595 start.go:293] postStartSetup for "ha-525790" (driver="kvm2")
	I0920 18:57:16.901229  768595 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:57:16.901256  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:57:16.901619  768595 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:57:16.901655  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:57:16.904582  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:16.905033  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:16.905077  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:16.905237  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:57:16.905435  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:57:16.905596  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:57:16.905768  768595 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:57:16.986592  768595 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:57:16.990860  768595 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:57:16.990889  768595 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/addons for local assets ...
	I0920 18:57:16.990948  768595 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/files for local assets ...
	I0920 18:57:16.991031  768595 filesync.go:149] local asset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> 7484972.pem in /etc/ssl/certs
	I0920 18:57:16.991042  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /etc/ssl/certs/7484972.pem
	I0920 18:57:16.991128  768595 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:57:17.000970  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 18:57:17.025421  768595 start.go:296] duration metric: took 124.189503ms for postStartSetup
	I0920 18:57:17.025508  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:57:17.025853  768595 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0920 18:57:17.025891  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:57:17.028640  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.029043  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:17.029071  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.029274  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:57:17.029491  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:57:17.029672  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:57:17.029818  768595 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	W0920 18:57:17.109879  768595 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0920 18:57:17.109911  768595 fix.go:56] duration metric: took 1m31.744451562s for fixHost
	I0920 18:57:17.109970  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:57:17.112933  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.113331  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:17.113363  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.113469  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:57:17.113648  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:57:17.113876  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:57:17.114026  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:57:17.114184  768595 main.go:141] libmachine: Using SSH client type: native
	I0920 18:57:17.114401  768595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:57:17.114415  768595 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:57:17.216062  768595 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726858637.181333539
	
	I0920 18:57:17.216090  768595 fix.go:216] guest clock: 1726858637.181333539
	I0920 18:57:17.216101  768595 fix.go:229] Guest: 2024-09-20 18:57:17.181333539 +0000 UTC Remote: 2024-09-20 18:57:17.109918074 +0000 UTC m=+91.872102399 (delta=71.415465ms)
	I0920 18:57:17.216125  768595 fix.go:200] guest clock delta is within tolerance: 71.415465ms
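The fix.go lines above compare the guest's clock (read over SSH with date +%s.%N) against the host's time and only resync when the difference exceeds a tolerance; here the 71.415465ms delta passes. A minimal sketch of that comparison, with the tolerance value assumed rather than taken from the log:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether guest and host clocks differ by at most tol.
func withinTolerance(guest, host time.Time, tol time.Duration) bool {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d <= tol
}

func main() {
	guest := time.Unix(0, 1726858637181333539) // 1726858637.181333539 from the log
	host := time.Date(2024, 9, 20, 18, 57, 17, 109918074, time.UTC)
	fmt.Println(guest.Sub(host), withinTolerance(guest, host, 2*time.Second)) // tolerance is an assumption
}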
	I0920 18:57:17.216130  768595 start.go:83] releasing machines lock for "ha-525790", held for 1m31.850683513s
	I0920 18:57:17.216152  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:57:17.216461  768595 main.go:141] libmachine: (ha-525790) Calling .GetIP
	I0920 18:57:17.219017  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.219376  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:17.219412  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.219494  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:57:17.220012  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:57:17.220193  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:57:17.220325  768595 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:57:17.220390  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:57:17.220399  768595 ssh_runner.go:195] Run: cat /version.json
	I0920 18:57:17.220418  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:57:17.222866  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.223251  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.223284  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:17.223301  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.223449  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:57:17.223621  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:57:17.223790  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:17.223811  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.223813  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:57:17.223960  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:57:17.223963  768595 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:57:17.224104  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:57:17.224245  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:57:17.224417  768595 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:57:17.296110  768595 ssh_runner.go:195] Run: systemctl --version
	I0920 18:57:17.321175  768595 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:57:17.477104  768595 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:57:17.485831  768595 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:57:17.485914  768595 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:57:17.495337  768595 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 18:57:17.495360  768595 start.go:495] detecting cgroup driver to use...
	I0920 18:57:17.495424  768595 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:57:17.511930  768595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:57:17.525328  768595 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:57:17.525387  768595 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:57:17.538722  768595 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:57:17.552122  768595 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:57:17.698681  768595 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:57:17.845821  768595 docker.go:233] disabling docker service ...
	I0920 18:57:17.845899  768595 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:57:17.863738  768595 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:57:17.877401  768595 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:57:18.024631  768595 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:57:18.172584  768595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:57:18.186842  768595 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:57:18.205846  768595 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:57:18.205925  768595 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.216288  768595 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:57:18.216358  768595 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.226555  768595 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.237201  768595 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.247630  768595 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:57:18.257984  768595 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.267924  768595 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.278978  768595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.288891  768595 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:57:18.297865  768595 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:57:18.306911  768595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:57:18.446180  768595 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:57:19.895749  768595 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.449526733s)
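The run of sed commands above edits cri-o's drop-in config at /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, default sysctls) and then restarts the service. A minimal Go sketch of the first two substitutions, assuming the same file path and values shown in the log; this is not minikube's crio.go, just an equivalent in-place rewrite:

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Pin the pause image, as the first sed above does.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	// Force the cgroupfs cgroup manager, as the second sed does.
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}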
	I0920 18:57:19.895791  768595 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:57:19.895837  768595 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:57:19.904678  768595 start.go:563] Will wait 60s for crictl version
	I0920 18:57:19.904743  768595 ssh_runner.go:195] Run: which crictl
	I0920 18:57:19.908608  768595 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:57:19.945193  768595 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:57:19.945279  768595 ssh_runner.go:195] Run: crio --version
	I0920 18:57:19.974543  768595 ssh_runner.go:195] Run: crio --version
	I0920 18:57:20.007822  768595 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:57:20.009139  768595 main.go:141] libmachine: (ha-525790) Calling .GetIP
	I0920 18:57:20.011764  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:20.012169  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:20.012198  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:20.012388  768595 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:57:20.017342  768595 kubeadm.go:883] updating cluster {Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:57:20.017482  768595 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:57:20.017559  768595 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:57:20.062678  768595 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:57:20.062704  768595 crio.go:433] Images already preloaded, skipping extraction
	I0920 18:57:20.062757  768595 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:57:20.098285  768595 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:57:20.098310  768595 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:57:20.098320  768595 kubeadm.go:934] updating node { 192.168.39.149 8443 v1.31.1 crio true true} ...
	I0920 18:57:20.098422  768595 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-525790 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:57:20.098485  768595 ssh_runner.go:195] Run: crio config
	I0920 18:57:20.146689  768595 cni.go:84] Creating CNI manager for ""
	I0920 18:57:20.146719  768595 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0920 18:57:20.146731  768595 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:57:20.146762  768595 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.149 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-525790 NodeName:ha-525790 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.149"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.149 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:57:20.146949  768595 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.149
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-525790"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.149
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.149"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:57:20.146969  768595 kube-vip.go:115] generating kube-vip config ...
	I0920 18:57:20.147010  768595 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 18:57:20.158523  768595 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 18:57:20.158643  768595 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
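Immediately before emitting this static-pod manifest, the log shows kube-vip.go probing the IPVS kernel modules with modprobe and then "auto-enabling control-plane load-balancing", which is why lb_enable and lb_port appear in the env list above. A hedged sketch of that gating decision (the exact rule is an assumption based only on those two log lines):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same modprobe invocation the log runs before deciding to enable load-balancing.
	err := exec.Command("sudo", "sh", "-c",
		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack").Run()
	fmt.Println("enable control-plane load-balancing:", err == nil)
}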
	I0920 18:57:20.158707  768595 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:57:20.168660  768595 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:57:20.168733  768595 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0920 18:57:20.178461  768595 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0920 18:57:20.198566  768595 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:57:20.217954  768595 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0920 18:57:20.237499  768595 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 18:57:20.258010  768595 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 18:57:20.262485  768595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:57:20.407038  768595 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:57:20.422336  768595 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790 for IP: 192.168.39.149
	I0920 18:57:20.422365  768595 certs.go:194] generating shared ca certs ...
	I0920 18:57:20.422387  768595 certs.go:226] acquiring lock for ca certs: {Name:mkf559981e1ff96dd3b092845a7637f34a653668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:57:20.422549  768595 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key
	I0920 18:57:20.422595  768595 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key
	I0920 18:57:20.422607  768595 certs.go:256] generating profile certs ...
	I0920 18:57:20.422714  768595 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.key
	I0920 18:57:20.422742  768595 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.5e4b97f0
	I0920 18:57:20.422758  768595 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.5e4b97f0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.149 192.168.39.246 192.168.39.105 192.168.39.254]
	I0920 18:57:20.498103  768595 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.5e4b97f0 ...
	I0920 18:57:20.498146  768595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.5e4b97f0: {Name:mkf1c7de4d51cd00dcbb302f98eb38a12aeaa743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:57:20.498349  768595 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.5e4b97f0 ...
	I0920 18:57:20.498366  768595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.5e4b97f0: {Name:mkd16bd720a2c366eb4c3af52495872448237117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:57:20.498439  768595 certs.go:381] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.5e4b97f0 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt
	I0920 18:57:20.498595  768595 certs.go:385] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.5e4b97f0 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key
	I0920 18:57:20.498727  768595 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key
	I0920 18:57:20.498744  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 18:57:20.498757  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 18:57:20.498773  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 18:57:20.498786  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 18:57:20.498798  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 18:57:20.498815  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 18:57:20.498828  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 18:57:20.498839  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 18:57:20.498902  768595 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem (1338 bytes)
	W0920 18:57:20.498929  768595 certs.go:480] ignoring /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497_empty.pem, impossibly tiny 0 bytes
	I0920 18:57:20.498939  768595 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:57:20.498966  768595 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:57:20.498987  768595 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:57:20.499009  768595 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem (1679 bytes)
	I0920 18:57:20.499046  768595 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 18:57:20.499073  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem -> /usr/share/ca-certificates/748497.pem
	I0920 18:57:20.499086  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /usr/share/ca-certificates/7484972.pem
	I0920 18:57:20.499098  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:57:20.499673  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:57:20.526194  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:57:20.550080  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:57:20.573814  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:57:20.597383  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0920 18:57:20.621333  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:57:20.644650  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:57:20.669077  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:57:20.692742  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem --> /usr/share/ca-certificates/748497.pem (1338 bytes)
	I0920 18:57:20.716696  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /usr/share/ca-certificates/7484972.pem (1708 bytes)
	I0920 18:57:20.740168  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:57:20.763494  768595 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:57:20.779948  768595 ssh_runner.go:195] Run: openssl version
	I0920 18:57:20.785654  768595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7484972.pem && ln -fs /usr/share/ca-certificates/7484972.pem /etc/ssl/certs/7484972.pem"
	I0920 18:57:20.796055  768595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7484972.pem
	I0920 18:57:20.800308  768595 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 18:33 /usr/share/ca-certificates/7484972.pem
	I0920 18:57:20.800350  768595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7484972.pem
	I0920 18:57:20.805711  768595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7484972.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:57:20.814658  768595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:57:20.825022  768595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:57:20.829283  768595 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:57:20.829328  768595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:57:20.835197  768595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:57:20.844367  768595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/748497.pem && ln -fs /usr/share/ca-certificates/748497.pem /etc/ssl/certs/748497.pem"
	I0920 18:57:20.858330  768595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/748497.pem
	I0920 18:57:20.862698  768595 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 18:33 /usr/share/ca-certificates/748497.pem
	I0920 18:57:20.862756  768595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/748497.pem
	I0920 18:57:20.868290  768595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/748497.pem /etc/ssl/certs/51391683.0"
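Each of the three blocks above installs a CA bundle under /usr/share/ca-certificates, computes its OpenSSL subject hash, and symlinks it into /etc/ssl/certs as <hash>.0 so OpenSSL-style trust lookups can find it. A small sketch of the hash-and-link step for the minikubeCA bundle, shelling out to the same openssl invocation seen in the log (assumes openssl is on PATH and the process may write to /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	// Equivalent of: openssl x509 -hash -noout -in <pemPath>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
		panic(err)
	}
	fmt.Println("linked", pemPath, "->", link)
}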
	I0920 18:57:20.877322  768595 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:57:20.881726  768595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:57:20.887174  768595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:57:20.892568  768595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:57:20.897933  768595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:57:20.903504  768595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:57:20.908964  768595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
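The six openssl x509 -checkend 86400 runs above verify that none of the control-plane certificates expires within the next 24 hours before the cluster is restarted. An equivalent check in Go, shown only as an illustrative sketch (the path is one of those from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, "err:", err)
}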
	I0920 18:57:20.914297  768595 kubeadm.go:392] StartCluster: {Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:57:20.914419  768595 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:57:20.914479  768595 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:57:20.953634  768595 cri.go:89] found id: "25771bcd68395f46a10f7a984281c99bb335a8ca69efb4245fa13e739f74e880"
	I0920 18:57:20.953663  768595 cri.go:89] found id: "05474c6dd3411b2d54bcdb9c489372dbdd009e7696128a025d961ffa61cea90e"
	I0920 18:57:20.953670  768595 cri.go:89] found id: "fdef47cd693637030df15d12b4203fda70a684a6ba84cf20353b69d3f9314810"
	I0920 18:57:20.953675  768595 cri.go:89] found id: "57fdde7a007ff9a10cfbb40f67eb3fd2036aeb4918ebe808fdb7ab94429b6f90"
	I0920 18:57:20.953679  768595 cri.go:89] found id: "172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1"
	I0920 18:57:20.953684  768595 cri.go:89] found id: "3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e"
	I0920 18:57:20.953688  768595 cri.go:89] found id: "5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98"
	I0920 18:57:20.953692  768595 cri.go:89] found id: "3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8"
	I0920 18:57:20.953696  768595 cri.go:89] found id: "c704a3be19bcb0cfb653cb3bdad4548ff16ab59fc886290b6b1ed57874b379cc"
	I0920 18:57:20.953705  768595 cri.go:89] found id: "7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706"
	I0920 18:57:20.953709  768595 cri.go:89] found id: "1196adfd1199669f289106197057a6a027e2a97c97830961d5c102c7143d67bb"
	I0920 18:57:20.953727  768595 cri.go:89] found id: "bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93"
	I0920 18:57:20.953734  768595 cri.go:89] found id: "49582cb9e07244a19c1edf1accdab94a1702d3cccc9e120b67b8c49f7629db72"
	I0920 18:57:20.953738  768595 cri.go:89] found id: ""
	I0920 18:57:20.953792  768595 ssh_runner.go:195] Run: sudo runc list -f json
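StartCluster begins by enumerating existing kube-system containers through the CRI, which is where the "found id:" lines above come from; the crictl ps -a --quiet invocation prints one container ID per line. A minimal sketch of collecting those IDs (assumes crictl is installed and sudo is available):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same listing command the log shows cri.go running on the node.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}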
	
	
	==> CRI-O <==
	Sep 20 19:07:08 ha-525790 crio[3621]: time="2024-09-20 19:07:08.969528889Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=17a0fc03-fe7b-4ce1-884b-d923bf86ce9f name=/runtime.v1.RuntimeService/Version
	Sep 20 19:07:08 ha-525790 crio[3621]: time="2024-09-20 19:07:08.970663293Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9074b85c-b5c1-46ca-b007-69bd53c89b42 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:07:08 ha-525790 crio[3621]: time="2024-09-20 19:07:08.971521270Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859228971486309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9074b85c-b5c1-46ca-b007-69bd53c89b42 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:07:08 ha-525790 crio[3621]: time="2024-09-20 19:07:08.972071196Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c5e3be3c-8854-4d4d-ba53-5e130fdfe583 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:08 ha-525790 crio[3621]: time="2024-09-20 19:07:08.972404765Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c5e3be3c-8854-4d4d-ba53-5e130fdfe583 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:08 ha-525790 crio[3621]: time="2024-09-20 19:07:08.973175155Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60cc74ea619fb947f0b84edcc3d897bff8752b038a8f9b1725bd5384cedcaabd,PodSandboxId:16a2a1305a51f60d21ddbbb6bb09b78cb9369c87e00a2d48189b0a13c1dff725,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726858739639401417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c15024e982c2250e97fb1c5a8ab6c46acbc2be83df9e1385a32e31ee31ed6,PodSandboxId:64ba18194b8cee6a5dc945c8a860e30df4889b5327784d28ca84fa101d3ee3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858689630170928,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5187d6ee59db46aeb3871c648064f41c7129c72b4fb12215dcfa9ff690e3dacc,PodSandboxId:16a2a1305a51f60d21ddbbb6bb09b78cb9369c87e00a2d48189b0a13c1dff725,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726858688628099955,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d017a5b283a90948c66d42e738cadc0b0558d24b471a758eba6ba1ba1d8f7c38,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858686629742984,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667c79074c454aa20ce82977f878cfe4a37c6f5ea0695c815cbba15549f3a45f,PodSandboxId:0a6e91416ea52f22ed259013e4a0a21bf72498fd8c931f248c2115876ae94d63,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726858681003653008,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f33ac8941017429ef2f8b90f5da558d02aee1e4f28f943f00cbb9948c09384,PodSandboxId:63cc3aec72e5ac7247b8edf4ee58731ed8aeec1987054757f74a79629da16d78,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726858662970382907,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250bdcc9f914b29a36cef0bb52cd1ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c9c9f659f7c8eea8960ff4d57a4049347beaf567d58cc6dd20e62ae35179d4,PodSandboxId:f4deab987a6c34e26ea3f805609a830b7822a6602102d1f413f9ce3e6884ef4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647995000091,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fefbc436d3eff2c5abd1684c59842894a743e26f9eb3ca974dbbc048112157d3,PodSandboxId:146e6c4948059463392671cd62b7ec8b5a29dc21e9ab6ebe87a3f7c25839e916,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858647885688947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041c8157b3922d976470968a7d805abcf93da72e8542bc6a01ac49bce0a281c4,PodSandboxId:097a4985f63bc254a2bdce47ba0c22b1b5b624fcf1c6cf3c7eb8ea0012d8d427,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726858647587461354,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5c19fcb571e8a32048d366070b16f702a20dcc8c9e90f7efdbdbcf068bb31c1,PodSandboxId:947865a8625cf4fd5130e55bfe16430a11573647782a77e8f42d10abe79271e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858647717412599,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1977c4370e572f9d46a89fdac2f5bf124e2fe7a35d95b9c4d329384e57264a9,PodSandboxId:dddb1e001fdf1c2b4148f34e191168adccea81cff10b5b3e18b1e18c43e20229,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647711164608,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf18d395747b302270fce67fcca779ef04ced72182d681e473e9fc81edb7b5a,PodSandboxId:f3b7300b04471e8ff85210139d0aaf4a37484fda119ba2d5c88427f9d6e07acb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858647658691652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561
e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a18e65180dd208a362d51b7dedc97d749dc64b3373b275ba6e9776934ebeb40,PodSandboxId:64ba18194b8cee6a5dc945c8a860e30df4889b5327784d28ca84fa101d3ee3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726858647533318453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:231315ec7d013fd90fbc26e6a4f8fc59d177b677f3ec717caa70d251b84de829,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726858647426978213,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:344b03b51dddb76a43e34eff505cae1c7f2fc0c407fd4b1907c7b90ca3f1740d,PodSandboxId:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726858192106346190,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e,PodSandboxId:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056980796182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1,PodSandboxId:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056983757613,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98,PodSandboxId:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726858044669171219,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8,PodSandboxId:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726858044313148897,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706,PodSandboxId:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726858033124074067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93,PodSandboxId:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726858033076556541,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c5e3be3c-8854-4d4d-ba53-5e130fdfe583 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:08 ha-525790 crio[3621]: time="2024-09-20 19:07:08.981441836Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=fedb9bee-ef69-478e-ba91-427fc1d9278d name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 20 19:07:08 ha-525790 crio[3621]: time="2024-09-20 19:07:08.981825971Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0a6e91416ea52f22ed259013e4a0a21bf72498fd8c931f248c2115876ae94d63,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-z26jr,Uid:3a3cda3d-ccab-4483-98e6-50d779cc3354,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726858680848534710,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:49:50.378606577Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:63cc3aec72e5ac7247b8edf4ee58731ed8aeec1987054757f74a79629da16d78,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-525790,Uid:250bdcc9f914b29a36cef0bb52cd1ac5,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1726858662865131258,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250bdcc9f914b29a36cef0bb52cd1ac5,},Annotations:map[string]string{kubernetes.io/config.hash: 250bdcc9f914b29a36cef0bb52cd1ac5,kubernetes.io/config.seen: 2024-09-20T18:57:20.222918724Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f4deab987a6c34e26ea3f805609a830b7822a6602102d1f413f9ce3e6884ef4c,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-nfnkj,Uid:7994989d-6bfa-4d25-b7b7-662d2e6c742c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726858647153225091,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09
-20T18:47:36.440226200Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:146e6c4948059463392671cd62b7ec8b5a29dc21e9ab6ebe87a3f7c25839e916,Metadata:&PodSandboxMetadata{Name:kube-proxy-958jz,Uid:46603403-eb82-4f15-a1da-da62194a072f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726858647110804577,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:47:23.840921604Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dddb1e001fdf1c2b4148f34e191168adccea81cff10b5b3e18b1e18c43e20229,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-rpcds,Uid:7db58219-7147-4a45-b233-ef3c698566ef,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726858647089120616
,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:47:36.433422835Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f3b7300b04471e8ff85210139d0aaf4a37484fda119ba2d5c88427f9d6e07acb,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-525790,Uid:b5b17991bc76439c3c561e1834ba5b98,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726858647068053851,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b5b17991bc76439c3c561e1834ba5b98,kubernetes.io/config
.seen: 2024-09-20T18:47:19.594862954Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:64ba18194b8cee6a5dc945c8a860e30df4889b5327784d28ca84fa101d3ee3af,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-525790,Uid:09c07a212745d10d359109606d1f8e5a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726858647066139608,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.149:8443,kubernetes.io/config.hash: 09c07a212745d10d359109606d1f8e5a,kubernetes.io/config.seen: 2024-09-20T18:47:19.594859927Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:947865a8625cf4fd5130e55bfe16430a11573647782a77e8f42d10abe79271e7,Metadata:&PodSandboxMetadata{Name:etcd-ha-525790,Uid
:a2b3e6b5917d1f11b27828fbc85076e4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726858647053085002,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.149:2379,kubernetes.io/config.hash: a2b3e6b5917d1f11b27828fbc85076e4,kubernetes.io/config.seen: 2024-09-20T18:47:19.594856708Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:097a4985f63bc254a2bdce47ba0c22b1b5b624fcf1c6cf3c7eb8ea0012d8d427,Metadata:&PodSandboxMetadata{Name:kindnet-9qbm6,Uid:87e8ae18-a561-48ec-9835-27446b6917d3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726858647030949753,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet
-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:47:23.865527140Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-525790,Uid:fa36b1aee3057cc6a6644c2a2b2b9582,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726858647026135836,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: fa36b1aee3057cc6a6644c2a2b2b9582,kubernetes.io/config.seen: 2024-09-20T18:47:19.594861884Z,kubernetes.io/config.source: f
ile,},RuntimeHandler:,},&PodSandbox{Id:16a2a1305a51f60d21ddbbb6bb09b78cb9369c87e00a2d48189b0a13c1dff725,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:ea6bf34f-c1f7-4216-a61f-be30846c991b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726858646986059431,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imag
ePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-20T18:47:36.445299882Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-z26jr,Uid:3a3cda3d-ccab-4483-98e6-50d779cc3354,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726858190692240668,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:49:50.378606577Z,kubernetes.io/config.source
: api,},RuntimeHandler:,},&PodSandbox{Id:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-nfnkj,Uid:7994989d-6bfa-4d25-b7b7-662d2e6c742c,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726858056748003547,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:47:36.440226200Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-rpcds,Uid:7db58219-7147-4a45-b233-ef3c698566ef,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726858056743924756,Labels:map[string]string{io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:47:36.433422835Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&PodSandboxMetadata{Name:kindnet-9qbm6,Uid:87e8ae18-a561-48ec-9835-27446b6917d3,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726858044173674425,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:47:23.865527140Z,kubernetes.io/config.source: api,},Runt
imeHandler:,},&PodSandbox{Id:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&PodSandboxMetadata{Name:kube-proxy-958jz,Uid:46603403-eb82-4f15-a1da-da62194a072f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726858044156236050,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:47:23.840921604Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-525790,Uid:b5b17991bc76439c3c561e1834ba5b98,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726858032856363299,Labels:map[string]string{component: kube-scheduler,io.kubern
etes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b5b17991bc76439c3c561e1834ba5b98,kubernetes.io/config.seen: 2024-09-20T18:47:12.380324762Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&PodSandboxMetadata{Name:etcd-ha-525790,Uid:a2b3e6b5917d1f11b27828fbc85076e4,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726858032825529828,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.149:2379,kubernetes.io/config.hash: a2b3e6b5
917d1f11b27828fbc85076e4,kubernetes.io/config.seen: 2024-09-20T18:47:12.380318617Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=fedb9bee-ef69-478e-ba91-427fc1d9278d name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 20 19:07:08 ha-525790 crio[3621]: time="2024-09-20 19:07:08.982689232Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=75d756aa-c134-4dfe-93f6-a4096bd506bb name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:08 ha-525790 crio[3621]: time="2024-09-20 19:07:08.982746975Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=75d756aa-c134-4dfe-93f6-a4096bd506bb name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:08 ha-525790 crio[3621]: time="2024-09-20 19:07:08.983164481Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60cc74ea619fb947f0b84edcc3d897bff8752b038a8f9b1725bd5384cedcaabd,PodSandboxId:16a2a1305a51f60d21ddbbb6bb09b78cb9369c87e00a2d48189b0a13c1dff725,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726858739639401417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c15024e982c2250e97fb1c5a8ab6c46acbc2be83df9e1385a32e31ee31ed6,PodSandboxId:64ba18194b8cee6a5dc945c8a860e30df4889b5327784d28ca84fa101d3ee3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858689630170928,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5187d6ee59db46aeb3871c648064f41c7129c72b4fb12215dcfa9ff690e3dacc,PodSandboxId:16a2a1305a51f60d21ddbbb6bb09b78cb9369c87e00a2d48189b0a13c1dff725,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726858688628099955,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d017a5b283a90948c66d42e738cadc0b0558d24b471a758eba6ba1ba1d8f7c38,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858686629742984,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667c79074c454aa20ce82977f878cfe4a37c6f5ea0695c815cbba15549f3a45f,PodSandboxId:0a6e91416ea52f22ed259013e4a0a21bf72498fd8c931f248c2115876ae94d63,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726858681003653008,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f33ac8941017429ef2f8b90f5da558d02aee1e4f28f943f00cbb9948c09384,PodSandboxId:63cc3aec72e5ac7247b8edf4ee58731ed8aeec1987054757f74a79629da16d78,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726858662970382907,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250bdcc9f914b29a36cef0bb52cd1ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c9c9f659f7c8eea8960ff4d57a4049347beaf567d58cc6dd20e62ae35179d4,PodSandboxId:f4deab987a6c34e26ea3f805609a830b7822a6602102d1f413f9ce3e6884ef4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647995000091,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fefbc436d3eff2c5abd1684c59842894a743e26f9eb3ca974dbbc048112157d3,PodSandboxId:146e6c4948059463392671cd62b7ec8b5a29dc21e9ab6ebe87a3f7c25839e916,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858647885688947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041c8157b3922d976470968a7d805abcf93da72e8542bc6a01ac49bce0a281c4,PodSandboxId:097a4985f63bc254a2bdce47ba0c22b1b5b624fcf1c6cf3c7eb8ea0012d8d427,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726858647587461354,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5c19fcb571e8a32048d366070b16f702a20dcc8c9e90f7efdbdbcf068bb31c1,PodSandboxId:947865a8625cf4fd5130e55bfe16430a11573647782a77e8f42d10abe79271e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858647717412599,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1977c4370e572f9d46a89fdac2f5bf124e2fe7a35d95b9c4d329384e57264a9,PodSandboxId:dddb1e001fdf1c2b4148f34e191168adccea81cff10b5b3e18b1e18c43e20229,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647711164608,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf18d395747b302270fce67fcca779ef04ced72182d681e473e9fc81edb7b5a,PodSandboxId:f3b7300b04471e8ff85210139d0aaf4a37484fda119ba2d5c88427f9d6e07acb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858647658691652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561
e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a18e65180dd208a362d51b7dedc97d749dc64b3373b275ba6e9776934ebeb40,PodSandboxId:64ba18194b8cee6a5dc945c8a860e30df4889b5327784d28ca84fa101d3ee3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726858647533318453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:231315ec7d013fd90fbc26e6a4f8fc59d177b677f3ec717caa70d251b84de829,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726858647426978213,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:344b03b51dddb76a43e34eff505cae1c7f2fc0c407fd4b1907c7b90ca3f1740d,PodSandboxId:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726858192106346190,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e,PodSandboxId:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056980796182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1,PodSandboxId:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056983757613,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98,PodSandboxId:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726858044669171219,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8,PodSandboxId:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726858044313148897,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706,PodSandboxId:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726858033124074067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93,PodSandboxId:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726858033076556541,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=75d756aa-c134-4dfe-93f6-a4096bd506bb name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:09 ha-525790 crio[3621]: time="2024-09-20 19:07:09.028398441Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aeb1f578-f9a5-4d3e-b629-6d087344e6ca name=/runtime.v1.RuntimeService/Version
	Sep 20 19:07:09 ha-525790 crio[3621]: time="2024-09-20 19:07:09.028508334Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aeb1f578-f9a5-4d3e-b629-6d087344e6ca name=/runtime.v1.RuntimeService/Version
	Sep 20 19:07:09 ha-525790 crio[3621]: time="2024-09-20 19:07:09.030480822Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d7f75a37-1b6f-4a42-b067-1b6d07674513 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:07:09 ha-525790 crio[3621]: time="2024-09-20 19:07:09.031103908Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859229031077553,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d7f75a37-1b6f-4a42-b067-1b6d07674513 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:07:09 ha-525790 crio[3621]: time="2024-09-20 19:07:09.032106744Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5dd94ef4-40f4-456d-bde2-3b9b568fea80 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:09 ha-525790 crio[3621]: time="2024-09-20 19:07:09.032165633Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5dd94ef4-40f4-456d-bde2-3b9b568fea80 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:09 ha-525790 crio[3621]: time="2024-09-20 19:07:09.032641107Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60cc74ea619fb947f0b84edcc3d897bff8752b038a8f9b1725bd5384cedcaabd,PodSandboxId:16a2a1305a51f60d21ddbbb6bb09b78cb9369c87e00a2d48189b0a13c1dff725,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726858739639401417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c15024e982c2250e97fb1c5a8ab6c46acbc2be83df9e1385a32e31ee31ed6,PodSandboxId:64ba18194b8cee6a5dc945c8a860e30df4889b5327784d28ca84fa101d3ee3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858689630170928,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5187d6ee59db46aeb3871c648064f41c7129c72b4fb12215dcfa9ff690e3dacc,PodSandboxId:16a2a1305a51f60d21ddbbb6bb09b78cb9369c87e00a2d48189b0a13c1dff725,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726858688628099955,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d017a5b283a90948c66d42e738cadc0b0558d24b471a758eba6ba1ba1d8f7c38,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858686629742984,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667c79074c454aa20ce82977f878cfe4a37c6f5ea0695c815cbba15549f3a45f,PodSandboxId:0a6e91416ea52f22ed259013e4a0a21bf72498fd8c931f248c2115876ae94d63,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726858681003653008,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f33ac8941017429ef2f8b90f5da558d02aee1e4f28f943f00cbb9948c09384,PodSandboxId:63cc3aec72e5ac7247b8edf4ee58731ed8aeec1987054757f74a79629da16d78,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726858662970382907,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250bdcc9f914b29a36cef0bb52cd1ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c9c9f659f7c8eea8960ff4d57a4049347beaf567d58cc6dd20e62ae35179d4,PodSandboxId:f4deab987a6c34e26ea3f805609a830b7822a6602102d1f413f9ce3e6884ef4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647995000091,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fefbc436d3eff2c5abd1684c59842894a743e26f9eb3ca974dbbc048112157d3,PodSandboxId:146e6c4948059463392671cd62b7ec8b5a29dc21e9ab6ebe87a3f7c25839e916,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858647885688947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041c8157b3922d976470968a7d805abcf93da72e8542bc6a01ac49bce0a281c4,PodSandboxId:097a4985f63bc254a2bdce47ba0c22b1b5b624fcf1c6cf3c7eb8ea0012d8d427,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726858647587461354,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5c19fcb571e8a32048d366070b16f702a20dcc8c9e90f7efdbdbcf068bb31c1,PodSandboxId:947865a8625cf4fd5130e55bfe16430a11573647782a77e8f42d10abe79271e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858647717412599,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1977c4370e572f9d46a89fdac2f5bf124e2fe7a35d95b9c4d329384e57264a9,PodSandboxId:dddb1e001fdf1c2b4148f34e191168adccea81cff10b5b3e18b1e18c43e20229,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647711164608,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf18d395747b302270fce67fcca779ef04ced72182d681e473e9fc81edb7b5a,PodSandboxId:f3b7300b04471e8ff85210139d0aaf4a37484fda119ba2d5c88427f9d6e07acb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858647658691652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561
e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a18e65180dd208a362d51b7dedc97d749dc64b3373b275ba6e9776934ebeb40,PodSandboxId:64ba18194b8cee6a5dc945c8a860e30df4889b5327784d28ca84fa101d3ee3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726858647533318453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:231315ec7d013fd90fbc26e6a4f8fc59d177b677f3ec717caa70d251b84de829,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726858647426978213,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:344b03b51dddb76a43e34eff505cae1c7f2fc0c407fd4b1907c7b90ca3f1740d,PodSandboxId:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726858192106346190,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e,PodSandboxId:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056980796182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1,PodSandboxId:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056983757613,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98,PodSandboxId:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726858044669171219,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8,PodSandboxId:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726858044313148897,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706,PodSandboxId:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726858033124074067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93,PodSandboxId:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726858033076556541,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5dd94ef4-40f4-456d-bde2-3b9b568fea80 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:09 ha-525790 crio[3621]: time="2024-09-20 19:07:09.075480195Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9862c56c-0ac3-4c8e-95e5-c23371e3e225 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:07:09 ha-525790 crio[3621]: time="2024-09-20 19:07:09.075566181Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9862c56c-0ac3-4c8e-95e5-c23371e3e225 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:07:09 ha-525790 crio[3621]: time="2024-09-20 19:07:09.077160475Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8eab73a3-f192-4aa0-960d-37203b8a8419 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:07:09 ha-525790 crio[3621]: time="2024-09-20 19:07:09.077839733Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859229077813917,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8eab73a3-f192-4aa0-960d-37203b8a8419 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:07:09 ha-525790 crio[3621]: time="2024-09-20 19:07:09.078744469Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ff1909f3-fdfe-448a-9d9c-1455860a2176 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:09 ha-525790 crio[3621]: time="2024-09-20 19:07:09.078878572Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ff1909f3-fdfe-448a-9d9c-1455860a2176 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:09 ha-525790 crio[3621]: time="2024-09-20 19:07:09.079399922Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60cc74ea619fb947f0b84edcc3d897bff8752b038a8f9b1725bd5384cedcaabd,PodSandboxId:16a2a1305a51f60d21ddbbb6bb09b78cb9369c87e00a2d48189b0a13c1dff725,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726858739639401417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c15024e982c2250e97fb1c5a8ab6c46acbc2be83df9e1385a32e31ee31ed6,PodSandboxId:64ba18194b8cee6a5dc945c8a860e30df4889b5327784d28ca84fa101d3ee3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858689630170928,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5187d6ee59db46aeb3871c648064f41c7129c72b4fb12215dcfa9ff690e3dacc,PodSandboxId:16a2a1305a51f60d21ddbbb6bb09b78cb9369c87e00a2d48189b0a13c1dff725,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726858688628099955,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d017a5b283a90948c66d42e738cadc0b0558d24b471a758eba6ba1ba1d8f7c38,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858686629742984,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667c79074c454aa20ce82977f878cfe4a37c6f5ea0695c815cbba15549f3a45f,PodSandboxId:0a6e91416ea52f22ed259013e4a0a21bf72498fd8c931f248c2115876ae94d63,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726858681003653008,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f33ac8941017429ef2f8b90f5da558d02aee1e4f28f943f00cbb9948c09384,PodSandboxId:63cc3aec72e5ac7247b8edf4ee58731ed8aeec1987054757f74a79629da16d78,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726858662970382907,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250bdcc9f914b29a36cef0bb52cd1ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c9c9f659f7c8eea8960ff4d57a4049347beaf567d58cc6dd20e62ae35179d4,PodSandboxId:f4deab987a6c34e26ea3f805609a830b7822a6602102d1f413f9ce3e6884ef4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647995000091,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fefbc436d3eff2c5abd1684c59842894a743e26f9eb3ca974dbbc048112157d3,PodSandboxId:146e6c4948059463392671cd62b7ec8b5a29dc21e9ab6ebe87a3f7c25839e916,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858647885688947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041c8157b3922d976470968a7d805abcf93da72e8542bc6a01ac49bce0a281c4,PodSandboxId:097a4985f63bc254a2bdce47ba0c22b1b5b624fcf1c6cf3c7eb8ea0012d8d427,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726858647587461354,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5c19fcb571e8a32048d366070b16f702a20dcc8c9e90f7efdbdbcf068bb31c1,PodSandboxId:947865a8625cf4fd5130e55bfe16430a11573647782a77e8f42d10abe79271e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858647717412599,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1977c4370e572f9d46a89fdac2f5bf124e2fe7a35d95b9c4d329384e57264a9,PodSandboxId:dddb1e001fdf1c2b4148f34e191168adccea81cff10b5b3e18b1e18c43e20229,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647711164608,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf18d395747b302270fce67fcca779ef04ced72182d681e473e9fc81edb7b5a,PodSandboxId:f3b7300b04471e8ff85210139d0aaf4a37484fda119ba2d5c88427f9d6e07acb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858647658691652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561
e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a18e65180dd208a362d51b7dedc97d749dc64b3373b275ba6e9776934ebeb40,PodSandboxId:64ba18194b8cee6a5dc945c8a860e30df4889b5327784d28ca84fa101d3ee3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726858647533318453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:231315ec7d013fd90fbc26e6a4f8fc59d177b677f3ec717caa70d251b84de829,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726858647426978213,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:344b03b51dddb76a43e34eff505cae1c7f2fc0c407fd4b1907c7b90ca3f1740d,PodSandboxId:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726858192106346190,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e,PodSandboxId:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056980796182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1,PodSandboxId:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056983757613,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98,PodSandboxId:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726858044669171219,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8,PodSandboxId:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726858044313148897,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706,PodSandboxId:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726858033124074067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93,PodSandboxId:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726858033076556541,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ff1909f3-fdfe-448a-9d9c-1455860a2176 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	60cc74ea619fb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Running             storage-provisioner       4                   16a2a1305a51f       storage-provisioner
	613c15024e982       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      8 minutes ago       Running             kube-apiserver            3                   64ba18194b8ce       kube-apiserver-ha-525790
	5187d6ee59db4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       3                   16a2a1305a51f       storage-provisioner
	d017a5b283a90       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      9 minutes ago       Running             kube-controller-manager   2                   4014793ae3deb       kube-controller-manager-ha-525790
	667c79074c454       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      9 minutes ago       Running             busybox                   1                   0a6e91416ea52       busybox-7dff88458-z26jr
	22f33ac894101       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      9 minutes ago       Running             kube-vip                  0                   63cc3aec72e5a       kube-vip-ha-525790
	a2c9c9f659f7c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      9 minutes ago       Running             coredns                   1                   f4deab987a6c3       coredns-7c65d6cfc9-nfnkj
	fefbc436d3eff       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      9 minutes ago       Running             kube-proxy                1                   146e6c4948059       kube-proxy-958jz
	c5c19fcb571e8       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      9 minutes ago       Running             etcd                      1                   947865a8625cf       etcd-ha-525790
	a1977c4370e57       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      9 minutes ago       Running             coredns                   1                   dddb1e001fdf1       coredns-7c65d6cfc9-rpcds
	6cf18d395747b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      9 minutes ago       Running             kube-scheduler            1                   f3b7300b04471       kube-scheduler-ha-525790
	041c8157b3922       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      9 minutes ago       Running             kindnet-cni               1                   097a4985f63bc       kindnet-9qbm6
	8a18e65180dd2       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      9 minutes ago       Exited              kube-apiserver            2                   64ba18194b8ce       kube-apiserver-ha-525790
	231315ec7d013       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      9 minutes ago       Exited              kube-controller-manager   1                   4014793ae3deb       kube-controller-manager-ha-525790
	344b03b51dddb       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   17 minutes ago      Exited              busybox                   0                   125671e39b996       busybox-7dff88458-z26jr
	172e8f75d2a84       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      19 minutes ago      Exited              coredns                   0                   5dbd6acffd5c5       coredns-7c65d6cfc9-nfnkj
	3dff404b6ad2a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      19 minutes ago      Exited              coredns                   0                   34517f9f64c86       coredns-7c65d6cfc9-rpcds
	5579930bef0fc       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      19 minutes ago      Exited              kindnet-cni               0                   64136f65f6d34       kindnet-9qbm6
	3d469134674c2       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      19 minutes ago      Exited              kube-proxy                0                   2e440a5ac73b7       kube-proxy-958jz
	7d0496391eb85       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      19 minutes ago      Exited              kube-scheduler            0                   fae09dfcf3d6f       kube-scheduler-ha-525790
	bcca29b119984       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      19 minutes ago      Exited              etcd                      0                   17818940c2036       etcd-ha-525790
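	
	For reference, the container-status table above is CRI-O's view of every container (running and exited) on the ha-525790 control-plane node. A minimal sketch of how a similar listing can be pulled straight from the runtime follows; it assumes the ha-525790 profile is still up and uses crictl from inside the node, so the exact command shape is illustrative rather than output captured in this run:
	
	  # Ask CRI-O on the node for all containers, including exited ones.
	  minikube -p ha-525790 ssh "sudo crictl ps -a"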
	
	
	==> coredns [172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1] <==
	[INFO] 10.244.1.2:49534 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000164113s
	[INFO] 10.244.2.2:50032 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167479s
	[INFO] 10.244.2.2:33413 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001865571s
	[INFO] 10.244.0.4:38374 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010475s
	[INFO] 10.244.0.4:44676 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170058s
	[INFO] 10.244.0.4:54182 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000123082s
	[INFO] 10.244.0.4:52067 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108075s
	[INFO] 10.244.1.2:36885 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000133944s
	[INFO] 10.244.2.2:48327 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127372s
	[INFO] 10.244.2.2:52262 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160755s
	[INFO] 10.244.0.4:44171 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111758s
	[INFO] 10.244.1.2:36220 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000196033s
	[INFO] 10.244.1.2:33859 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000222322s
	[INFO] 10.244.1.2:55349 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000158431s
	[INFO] 10.244.2.2:37976 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138385s
	[INFO] 10.244.2.2:56378 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000191303s
	[INFO] 10.244.2.2:54246 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117607s
	[INFO] 10.244.0.4:53115 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116565s
	[INFO] 10.244.0.4:49608 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000095821s
	[INFO] 10.244.0.4:60862 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111997s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e] <==
	[INFO] 10.244.2.2:42750 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000311517s
	[INFO] 10.244.2.2:42748 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001319529s
	[INFO] 10.244.2.2:49203 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000190348s
	[INFO] 10.244.2.2:44849 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00019366s
	[INFO] 10.244.2.2:52186 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103082s
	[INFO] 10.244.0.4:58300 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140735s
	[INFO] 10.244.0.4:59752 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001702673s
	[INFO] 10.244.0.4:33721 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001170599s
	[INFO] 10.244.0.4:42180 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061647s
	[INFO] 10.244.1.2:49177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000333372s
	[INFO] 10.244.1.2:57192 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147894s
	[INFO] 10.244.1.2:59125 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095482s
	[INFO] 10.244.2.2:50879 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019818s
	[INFO] 10.244.2.2:47467 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096359s
	[INFO] 10.244.0.4:54464 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087148s
	[INFO] 10.244.0.4:40326 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011895s
	[INFO] 10.244.0.4:46142 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071583s
	[INFO] 10.244.1.2:50168 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000224622s
	[INFO] 10.244.2.2:50611 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000117577s
	[INFO] 10.244.0.4:57391 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000320119s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1717&timeout=7m26s&timeoutSeconds=446&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1736&timeout=7m56s&timeoutSeconds=476&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1713&timeout=6m15s&timeoutSeconds=375&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a1977c4370e572f9d46a89fdac2f5bf124e2fe7a35d95b9c4d329384e57264a9] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[211372876]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (20-Sep-2024 18:57:36.158) (total time: 10001ms):
	Trace[211372876]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:57:46.159)
	Trace[211372876]: [10.001611713s] [10.001611713s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:33172->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:33172->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [a2c9c9f659f7c8eea8960ff4d57a4049347beaf567d58cc6dd20e62ae35179d4] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[159325140]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (20-Sep-2024 18:57:32.397) (total time: 10001ms):
	Trace[159325140]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:57:42.399)
	Trace[159325140]: [10.001577176s] [10.001577176s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:56710->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:56710->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:56726->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:56726->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-525790
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-525790
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=ha-525790
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_47_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:47:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-525790
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:07:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 19:03:15 +0000   Fri, 20 Sep 2024 18:47:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 19:03:15 +0000   Fri, 20 Sep 2024 18:47:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 19:03:15 +0000   Fri, 20 Sep 2024 18:47:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 19:03:15 +0000   Fri, 20 Sep 2024 18:47:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.149
	  Hostname:    ha-525790
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d3f2b96a8819496a94e034cf4adf7a85
	  System UUID:                d3f2b96a-8819-496a-94e0-34cf4adf7a85
	  Boot ID:                    02f79ecd-567f-4683-83ce-59afb46feab6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-z26jr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-7c65d6cfc9-nfnkj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                 coredns-7c65d6cfc9-rpcds             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                 etcd-ha-525790                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-9qbm6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-525790             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-525790    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-958jz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-525790             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-525790                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 19m                  kube-proxy       
	  Normal   Starting                 9m                   kube-proxy       
	  Normal   NodeHasSufficientPID     19m                  kubelet          Node ha-525790 status is now: NodeHasSufficientPID
	  Normal   Starting                 19m                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  19m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  19m                  kubelet          Node ha-525790 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m                  kubelet          Node ha-525790 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           19m                  node-controller  Node ha-525790 event: Registered Node ha-525790 in Controller
	  Normal   NodeReady                19m                  kubelet          Node ha-525790 status is now: NodeReady
	  Normal   RegisteredNode           18m                  node-controller  Node ha-525790 event: Registered Node ha-525790 in Controller
	  Normal   RegisteredNode           17m                  node-controller  Node ha-525790 event: Registered Node ha-525790 in Controller
	  Normal   NodeNotReady             10m (x3 over 10m)    kubelet          Node ha-525790 status is now: NodeNotReady
	  Warning  ContainerGCFailed        9m50s (x2 over 10m)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           9m4s                 node-controller  Node ha-525790 event: Registered Node ha-525790 in Controller
	  Normal   RegisteredNode           8m55s                node-controller  Node ha-525790 event: Registered Node ha-525790 in Controller
	
	
	Name:               ha-525790-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-525790-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=ha-525790
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T18_48_16_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:48:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-525790-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:07:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 19:04:00 +0000   Fri, 20 Sep 2024 18:58:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 19:04:00 +0000   Fri, 20 Sep 2024 18:58:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 19:04:00 +0000   Fri, 20 Sep 2024 18:58:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 19:04:00 +0000   Fri, 20 Sep 2024 18:58:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.246
	  Hostname:    ha-525790-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1dbde4511fc24bbcb1281f7b7d6ff24f
	  System UUID:                1dbde451-1fc2-4bbc-b128-1f7b7d6ff24f
	  Boot ID:                    d5658712-0cd7-4a8d-96e7-dd80ca41efeb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7jtss                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-525790-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kindnet-8glgp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-apiserver-ha-525790-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-525790-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-sspfs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-525790-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-525790-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 18m                    kube-proxy       
	  Normal  Starting                 8m37s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)      kubelet          Node ha-525790-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)      kubelet          Node ha-525790-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)      kubelet          Node ha-525790-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                    node-controller  Node ha-525790-m02 event: Registered Node ha-525790-m02 in Controller
	  Normal  RegisteredNode           18m                    node-controller  Node ha-525790-m02 event: Registered Node ha-525790-m02 in Controller
	  Normal  RegisteredNode           17m                    node-controller  Node ha-525790-m02 event: Registered Node ha-525790-m02 in Controller
	  Normal  NodeNotReady             15m                    node-controller  Node ha-525790-m02 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  9m25s (x8 over 9m25s)  kubelet          Node ha-525790-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 9m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    9m25s (x8 over 9m25s)  kubelet          Node ha-525790-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m25s (x7 over 9m25s)  kubelet          Node ha-525790-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m4s                   node-controller  Node ha-525790-m02 event: Registered Node ha-525790-m02 in Controller
	  Normal  RegisteredNode           8m55s                  node-controller  Node ha-525790-m02 event: Registered Node ha-525790-m02 in Controller
	
	
	Name:               ha-525790-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-525790-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=ha-525790
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T18_50_26_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:50:26 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-525790-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:53:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 20 Sep 2024 18:50:57 +0000   Fri, 20 Sep 2024 18:58:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 20 Sep 2024 18:50:57 +0000   Fri, 20 Sep 2024 18:58:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 20 Sep 2024 18:50:57 +0000   Fri, 20 Sep 2024 18:58:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 20 Sep 2024 18:50:57 +0000   Fri, 20 Sep 2024 18:58:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.181
	  Hostname:    ha-525790-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c58d814e5e5d49b699d9f977eb54ff58
	  System UUID:                c58d814e-5e5d-49b6-99d9-f977eb54ff58
	  Boot ID:                    69924ac5-b6f2-4ddd-bd0d-fa3c683681d6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-df8hf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-proxy-w98cx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  NodeHasSufficientMemory  16m (x2 over 16m)  kubelet          Node ha-525790-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x2 over 16m)  kubelet          Node ha-525790-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x2 over 16m)  kubelet          Node ha-525790-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16m                node-controller  Node ha-525790-m04 event: Registered Node ha-525790-m04 in Controller
	  Normal  RegisteredNode           16m                node-controller  Node ha-525790-m04 event: Registered Node ha-525790-m04 in Controller
	  Normal  RegisteredNode           16m                node-controller  Node ha-525790-m04 event: Registered Node ha-525790-m04 in Controller
	  Normal  NodeReady                16m                kubelet          Node ha-525790-m04 status is now: NodeReady
	  Normal  RegisteredNode           9m4s               node-controller  Node ha-525790-m04 event: Registered Node ha-525790-m04 in Controller
	  Normal  RegisteredNode           8m55s              node-controller  Node ha-525790-m04 event: Registered Node ha-525790-m04 in Controller
	  Normal  NodeNotReady             8m24s              node-controller  Node ha-525790-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep20 18:47] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.053987] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058272] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.180542] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.143015] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.280287] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +3.923962] systemd-fstab-generator[743]: Ignoring "noauto" option for root device
	[  +3.905808] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.064972] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.290695] systemd-fstab-generator[1298]: Ignoring "noauto" option for root device
	[  +0.091789] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.472602] kauditd_printk_skb: 36 callbacks suppressed
	[ +11.974718] kauditd_printk_skb: 23 callbacks suppressed
	[Sep20 18:48] kauditd_printk_skb: 24 callbacks suppressed
	[Sep20 18:57] systemd-fstab-generator[3539]: Ignoring "noauto" option for root device
	[  +0.144284] systemd-fstab-generator[3551]: Ignoring "noauto" option for root device
	[  +0.175904] systemd-fstab-generator[3567]: Ignoring "noauto" option for root device
	[  +0.158366] systemd-fstab-generator[3579]: Ignoring "noauto" option for root device
	[  +0.266903] systemd-fstab-generator[3607]: Ignoring "noauto" option for root device
	[  +1.960904] systemd-fstab-generator[3709]: Ignoring "noauto" option for root device
	[  +6.729271] kauditd_printk_skb: 122 callbacks suppressed
	[ +12.246162] kauditd_printk_skb: 97 callbacks suppressed
	[ +10.067162] kauditd_printk_skb: 1 callbacks suppressed
	[Sep20 18:58] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93] <==
	2024/09/20 18:55:46 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/20 18:55:46 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/20 18:55:46 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/20 18:55:46 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-20T18:55:46.260496Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.149:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T18:55:46.260580Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.149:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-20T18:55:46.260677Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"ba3e3e863cacc4d","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-20T18:55:46.260889Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c8e33f35ad636831"}
	{"level":"info","ts":"2024-09-20T18:55:46.260941Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c8e33f35ad636831"}
	{"level":"info","ts":"2024-09-20T18:55:46.260980Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c8e33f35ad636831"}
	{"level":"info","ts":"2024-09-20T18:55:46.261097Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831"}
	{"level":"info","ts":"2024-09-20T18:55:46.261203Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831"}
	{"level":"info","ts":"2024-09-20T18:55:46.261373Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831"}
	{"level":"info","ts":"2024-09-20T18:55:46.261477Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c8e33f35ad636831"}
	{"level":"info","ts":"2024-09-20T18:55:46.261501Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T18:55:46.261581Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T18:55:46.261695Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T18:55:46.261817Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ba3e3e863cacc4d","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T18:55:46.261878Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ba3e3e863cacc4d","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T18:55:46.261957Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ba3e3e863cacc4d","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T18:55:46.261987Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T18:55:46.264966Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.149:2380"}
	{"level":"warn","ts":"2024-09-20T18:55:46.265043Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.834260925s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-20T18:55:46.265109Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.149:2380"}
	{"level":"info","ts":"2024-09-20T18:55:46.265138Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-525790","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.149:2380"],"advertise-client-urls":["https://192.168.39.149:2379"]}
	
	
	==> etcd [c5c19fcb571e8a32048d366070b16f702a20dcc8c9e90f7efdbdbcf068bb31c1] <==
	{"level":"warn","ts":"2024-09-20T19:06:48.865957Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6c1c1087b613d98","rtt":"0s","error":"dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:51.451602Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.105:2380/version","remote-member-id":"6c1c1087b613d98","error":"Get \"https://192.168.39.105:2380/version\": dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:51.451671Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"6c1c1087b613d98","error":"Get \"https://192.168.39.105:2380/version\": dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:53.866413Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6c1c1087b613d98","rtt":"0s","error":"dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:53.866484Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6c1c1087b613d98","rtt":"0s","error":"dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:55.453403Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.105:2380/version","remote-member-id":"6c1c1087b613d98","error":"Get \"https://192.168.39.105:2380/version\": dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:55.453528Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"6c1c1087b613d98","error":"Get \"https://192.168.39.105:2380/version\": dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:58.867162Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6c1c1087b613d98","rtt":"0s","error":"dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:58.867211Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6c1c1087b613d98","rtt":"0s","error":"dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:59.455812Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.105:2380/version","remote-member-id":"6c1c1087b613d98","error":"Get \"https://192.168.39.105:2380/version\": dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:59.455942Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"6c1c1087b613d98","error":"Get \"https://192.168.39.105:2380/version\": dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:07:03.458249Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.105:2380/version","remote-member-id":"6c1c1087b613d98","error":"Get \"https://192.168.39.105:2380/version\": dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:07:03.458445Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"6c1c1087b613d98","error":"Get \"https://192.168.39.105:2380/version\": dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:07:03.867718Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6c1c1087b613d98","rtt":"0s","error":"dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:07:03.867757Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6c1c1087b613d98","rtt":"0s","error":"dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-20T19:07:06.307761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d switched to configuration voters=(838764542867197005 14475483127073695793)"}
	{"level":"info","ts":"2024-09-20T19:07:06.313082Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"65f5490397676253","local-member-id":"ba3e3e863cacc4d","removed-remote-peer-id":"6c1c1087b613d98","removed-remote-peer-urls":["https://192.168.39.105:2380"]}
	{"level":"info","ts":"2024-09-20T19:07:06.313184Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T19:07:06.313243Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T19:07:06.313346Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T19:07:06.313440Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ba3e3e863cacc4d","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T19:07:06.313502Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ba3e3e863cacc4d","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T19:07:06.313533Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ba3e3e863cacc4d","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T19:07:06.313548Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T19:07:06.313562Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"ba3e3e863cacc4d","removed-remote-peer-id":"6c1c1087b613d98"}
	
	
	==> kernel <==
	 19:07:09 up 20 min,  0 users,  load average: 1.22, 0.53, 0.37
	Linux ha-525790 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [041c8157b3922d976470968a7d805abcf93da72e8542bc6a01ac49bce0a281c4] <==
	I0920 19:06:38.905786       1 main.go:322] Node ha-525790-m03 has CIDR [10.244.2.0/24] 
	I0920 19:06:38.905836       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 19:06:38.905865       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	I0920 19:06:48.912129       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 19:06:48.912193       1 main.go:299] handling current node
	I0920 19:06:48.912215       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 19:06:48.912224       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 19:06:48.912465       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0920 19:06:48.912503       1 main.go:322] Node ha-525790-m03 has CIDR [10.244.2.0/24] 
	I0920 19:06:48.912579       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 19:06:48.912587       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	I0920 19:06:58.914181       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0920 19:06:58.914363       1 main.go:322] Node ha-525790-m03 has CIDR [10.244.2.0/24] 
	I0920 19:06:58.914504       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 19:06:58.914530       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	I0920 19:06:58.914610       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 19:06:58.914630       1 main.go:299] handling current node
	I0920 19:06:58.914657       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 19:06:58.914674       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 19:07:08.906374       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 19:07:08.906419       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	I0920 19:07:08.906610       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 19:07:08.906618       1 main.go:299] handling current node
	I0920 19:07:08.906636       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 19:07:08.906640       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98] <==
	I0920 18:55:25.880711       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:55:25.880821       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 18:55:25.880982       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0920 18:55:25.881006       1 main.go:322] Node ha-525790-m03 has CIDR [10.244.2.0/24] 
	I0920 18:55:25.881062       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 18:55:25.881173       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	I0920 18:55:25.881330       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 18:55:25.881366       1 main.go:299] handling current node
	I0920 18:55:35.880574       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 18:55:35.880753       1 main.go:299] handling current node
	I0920 18:55:35.880788       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:55:35.880809       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 18:55:35.880968       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0920 18:55:35.881037       1 main.go:322] Node ha-525790-m03 has CIDR [10.244.2.0/24] 
	I0920 18:55:35.881188       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 18:55:35.881225       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	E0920 18:55:44.415378       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes)
	I0920 18:55:45.880519       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 18:55:45.880573       1 main.go:299] handling current node
	I0920 18:55:45.880594       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:55:45.880614       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 18:55:45.880735       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0920 18:55:45.880740       1 main.go:322] Node ha-525790-m03 has CIDR [10.244.2.0/24] 
	I0920 18:55:45.880784       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 18:55:45.880788       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [613c15024e982c2250e97fb1c5a8ab6c46acbc2be83df9e1385a32e31ee31ed6] <==
	I0920 18:58:11.563790       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0920 18:58:11.566669       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0920 18:58:11.647086       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0920 18:58:11.649520       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 18:58:11.649641       1 policy_source.go:224] refreshing policies
	I0920 18:58:11.663884       1 shared_informer.go:320] Caches are synced for configmaps
	I0920 18:58:11.678760       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0920 18:58:11.678792       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0920 18:58:11.679064       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0920 18:58:11.679104       1 aggregator.go:171] initial CRD sync complete...
	I0920 18:58:11.679119       1 autoregister_controller.go:144] Starting autoregister controller
	I0920 18:58:11.679124       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0920 18:58:11.679129       1 cache.go:39] Caches are synced for autoregister controller
	I0920 18:58:11.711573       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	W0920 18:58:11.731002       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.246]
	I0920 18:58:11.733911       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 18:58:11.740737       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0920 18:58:11.743338       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0920 18:58:11.744337       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 18:58:11.750631       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0920 18:58:11.753464       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0920 18:58:11.759128       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0920 18:58:11.762522       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0920 18:58:12.550864       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0920 18:58:13.073998       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.149 192.168.39.246]
	
	
	==> kube-apiserver [8a18e65180dd208a362d51b7dedc97d749dc64b3373b275ba6e9776934ebeb40] <==
	I0920 18:57:28.372014       1 options.go:228] external host was not specified, using 192.168.39.149
	I0920 18:57:28.378488       1 server.go:142] Version: v1.31.1
	I0920 18:57:28.378636       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:57:28.839362       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0920 18:57:28.848920       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0920 18:57:28.849194       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0920 18:57:28.849607       1 instance.go:232] Using reconciler: lease
	I0920 18:57:28.850178       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0920 18:57:48.834604       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0920 18:57:48.834768       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0920 18:57:48.850633       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [231315ec7d013fd90fbc26e6a4f8fc59d177b677f3ec717caa70d251b84de829] <==
	I0920 18:57:29.220757       1 serving.go:386] Generated self-signed cert in-memory
	I0920 18:57:29.591669       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0920 18:57:29.591756       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:57:29.593495       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0920 18:57:29.593650       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0920 18:57:29.593717       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0920 18:57:29.593811       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0920 18:57:49.858234       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.149:8443/healthz\": dial tcp 192.168.39.149:8443: connect: connection refused"
	
	
	==> kube-controller-manager [d017a5b283a90948c66d42e738cadc0b0558d24b471a758eba6ba1ba1d8f7c38] <==
	I0920 18:58:49.788903       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:58:50.566798       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m03"
	I0920 18:58:53.132790       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m02"
	I0920 18:58:59.474045       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m03"
	I0920 18:58:59.492472       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m03"
	I0920 18:58:59.712336       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m03"
	I0920 18:59:00.361226       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="74.989µs"
	I0920 18:59:00.746940       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:59:11.733735       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.059122ms"
	I0920 18:59:11.735538       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="64.244µs"
	I0920 18:59:29.887985       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m03"
	I0920 19:03:15.447982       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790"
	I0920 19:04:00.014795       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m02"
	I0920 19:04:36.164186       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m03"
	I0920 19:07:02.917240       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m03"
	I0920 19:07:02.944139       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m03"
	I0920 19:07:03.006373       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.585632ms"
	I0920 19:07:03.064470       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="58.042822ms"
	I0920 19:07:03.077715       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.143351ms"
	I0920 19:07:03.078101       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="60.13µs"
	I0920 19:07:05.091941       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="70.932µs"
	I0920 19:07:05.251002       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="134.782µs"
	I0920 19:07:05.255355       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="142.045µs"
	I0920 19:07:06.927218       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m03"
	E0920 19:07:06.970503       1 garbagecollector.go:399] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"coordination.k8s.io/v1\", Kind:\"Lease\", Name:\"ha-525790-m03\", UID:\"4e101ad6-fea3-4e7b-b427-b332dd32130b\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"kube-node-lease\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-525790-m03\", UID:\"ffd11555-2f09-4ba2-b423-72f88865bfc4\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io \"ha-525790-m03\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8] <==
	E0920 18:54:29.445959       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:54:32.517585       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:54:32.517837       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:54:32.517758       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:54:32.518080       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:54:38.533704       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:54:38.533801       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:54:44.681435       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:54:44.681569       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:54:47.750519       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:54:47.750703       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:54:50.823395       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:54:50.823514       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:55:03.112602       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:55:03.112697       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:55:06.182242       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:55:06.182543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:55:12.325811       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:55:12.325963       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:55:30.759875       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:55:30.760470       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:55:46.118426       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:55:46.118557       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:55:46.118676       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:55:46.118740       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [fefbc436d3eff2c5abd1684c59842894a743e26f9eb3ca974dbbc048112157d3] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:57:30.565761       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-525790\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 18:57:33.640050       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-525790\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 18:57:36.709752       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-525790\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 18:57:42.859625       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-525790\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 18:57:52.070869       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-525790\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0920 18:58:09.389517       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.149"]
	E0920 18:58:09.389799       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:58:09.436483       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:58:09.436606       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:58:09.436666       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:58:09.441038       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:58:09.441633       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:58:09.442010       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:58:09.446213       1 config.go:199] "Starting service config controller"
	I0920 18:58:09.446402       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:58:09.446500       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:58:09.446578       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:58:09.448018       1 config.go:328] "Starting node config controller"
	I0920 18:58:09.448140       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:58:09.548198       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:58:09.548215       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:58:09.548482       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6cf18d395747b302270fce67fcca779ef04ced72182d681e473e9fc81edb7b5a] <==
	W0920 18:58:04.819433       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.149:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 18:58:04.819512       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.149:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:58:05.565781       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.149:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 18:58:05.565894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.149:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:58:06.403484       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.149:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 18:58:06.403602       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.149:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:58:06.898106       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.149:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 18:58:06.898186       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.149:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:58:06.917971       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.149:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 18:58:06.918035       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.149:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:58:07.845992       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.149:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 18:58:07.846092       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.149:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:58:08.543222       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.149:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 18:58:08.543360       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.149:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:58:08.927724       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.149:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 18:58:08.927853       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.149:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:58:09.205871       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.149:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 18:58:09.205944       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.149:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:58:11.576509       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 18:58:11.576628       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:58:11.576905       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 18:58:11.577007       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:58:11.577232       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 18:58:11.577486       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0920 18:58:30.575744       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706] <==
	I0920 18:50:26.263699       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-w98cx" node="ha-525790-m04"
	E0920 18:50:26.297985       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-hwgsh\": pod kindnet-hwgsh is already assigned to node \"ha-525790-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-hwgsh" node="ha-525790-m04"
	E0920 18:50:26.298064       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9ff40332-cdad-4e9f-99ca-28d1271713a8(kube-system/kindnet-hwgsh) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-hwgsh"
	E0920 18:50:26.298079       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-hwgsh\": pod kindnet-hwgsh is already assigned to node \"ha-525790-m04\"" pod="kube-system/kindnet-hwgsh"
	I0920 18:50:26.298095       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-hwgsh" node="ha-525790-m04"
	E0920 18:50:26.298461       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-rh89s\": pod kube-proxy-rh89s is already assigned to node \"ha-525790-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-rh89s" node="ha-525790-m04"
	E0920 18:50:26.298512       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 340d5abf-2e79-4cc0-8f1f-130c1e176259(kube-system/kube-proxy-rh89s) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-rh89s"
	E0920 18:50:26.298529       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-rh89s\": pod kube-proxy-rh89s is already assigned to node \"ha-525790-m04\"" pod="kube-system/kube-proxy-rh89s"
	I0920 18:50:26.298548       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-rh89s" node="ha-525790-m04"
	E0920 18:55:33.838133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0920 18:55:34.010012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0920 18:55:34.163933       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0920 18:55:35.197228       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0920 18:55:38.126361       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0920 18:55:38.323639       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0920 18:55:39.518704       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0920 18:55:39.859524       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0920 18:55:40.061646       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0920 18:55:40.131765       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0920 18:55:41.452147       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0920 18:55:43.449377       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0920 18:55:44.076439       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0920 18:55:44.277830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0920 18:55:45.932626       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0920 18:55:46.167365       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 20 19:05:39 ha-525790 kubelet[1305]: E0920 19:05:39.997200    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859139996916490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:05:39 ha-525790 kubelet[1305]: E0920 19:05:39.997304    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859139996916490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:05:49 ha-525790 kubelet[1305]: E0920 19:05:49.999055    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859149998696986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:05:49 ha-525790 kubelet[1305]: E0920 19:05:49.999377    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859149998696986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:00 ha-525790 kubelet[1305]: E0920 19:06:00.001953    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859160001440634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:00 ha-525790 kubelet[1305]: E0920 19:06:00.001978    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859160001440634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:10 ha-525790 kubelet[1305]: E0920 19:06:10.003368    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859170002995065,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:10 ha-525790 kubelet[1305]: E0920 19:06:10.003451    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859170002995065,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:19 ha-525790 kubelet[1305]: E0920 19:06:19.637558    1305 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 19:06:19 ha-525790 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 19:06:19 ha-525790 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 19:06:19 ha-525790 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 19:06:19 ha-525790 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 19:06:20 ha-525790 kubelet[1305]: E0920 19:06:20.006100    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859180005668944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:20 ha-525790 kubelet[1305]: E0920 19:06:20.006127    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859180005668944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:30 ha-525790 kubelet[1305]: E0920 19:06:30.007559    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859190007239612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:30 ha-525790 kubelet[1305]: E0920 19:06:30.007607    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859190007239612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:40 ha-525790 kubelet[1305]: E0920 19:06:40.010351    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859200009651480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:40 ha-525790 kubelet[1305]: E0920 19:06:40.010404    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859200009651480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:50 ha-525790 kubelet[1305]: E0920 19:06:50.012654    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859210012010968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:50 ha-525790 kubelet[1305]: E0920 19:06:50.013017    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859210012010968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:07:00 ha-525790 kubelet[1305]: E0920 19:07:00.015090    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859220014798886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:07:00 ha-525790 kubelet[1305]: E0920 19:07:00.015116    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859220014798886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:07:10 ha-525790 kubelet[1305]: E0920 19:07:10.017892    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859230016933324,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:07:10 ha-525790 kubelet[1305]: E0920 19:07:10.017923    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859230016933324,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 19:07:08.645945  771596 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19678-739831/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
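The "bufio.Scanner: token too long" message in the stderr block above is Go's bufio.ErrTooLong: a Scanner rejects any line longer than its maximum token size (64 KiB by default), so an oversized line in lastStart.txt cannot be read. A minimal sketch of raising that limit with Scanner.Buffer, assuming a plain line-by-line reader rather than minikube's actual logs code:

	package main
	
	import (
		"bufio"
		"fmt"
		"os"
	)
	
	func main() {
		// Hypothetical path; stands in for the lastStart.txt named in the error above.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()
	
		sc := bufio.NewScanner(f)
		// Default limit is bufio.MaxScanTokenSize (64 KiB); longer lines make
		// Scan() stop with bufio.ErrTooLong ("token too long").
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024) // accept lines up to 10 MiB
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}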
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-525790 -n ha-525790
helpers_test.go:261: (dbg) Run:  kubectl --context ha-525790 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-fcds6 etcd-ha-525790-m03 kube-controller-manager-ha-525790-m03 kube-vip-ha-525790-m03
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-525790 describe pod busybox-7dff88458-fcds6 etcd-ha-525790-m03 kube-controller-manager-ha-525790-m03 kube-vip-ha-525790-m03
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context ha-525790 describe pod busybox-7dff88458-fcds6 etcd-ha-525790-m03 kube-controller-manager-ha-525790-m03 kube-vip-ha-525790-m03: exit status 1 (87.424319ms)

                                                
                                                
-- stdout --
	Name:             busybox-7dff88458-fcds6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l7bnh (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-l7bnh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age              From               Message
	  ----     ------            ----             ----               -------
	  Warning  FailedScheduling  7s               default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  5s               default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  5s (x2 over 7s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "etcd-ha-525790-m03" not found
	Error from server (NotFound): pods "kube-controller-manager-ha-525790-m03" not found
	Error from server (NotFound): pods "kube-vip-ha-525790-m03" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context ha-525790 describe pod busybox-7dff88458-fcds6 etcd-ha-525790-m03 kube-controller-manager-ha-525790-m03 kube-vip-ha-525790-m03: exit status 1
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (8.33s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:413: expected profile "ha-525790" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-525790\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-525790\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-525790\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.149\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.246\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.181\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-525790 -n ha-525790
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-525790 logs -n 25: (1.601644389s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790-m02 sudo cat                                          | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m03_ha-525790-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m03:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04:/home/docker/cp-test_ha-525790-m03_ha-525790-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790-m04 sudo cat                                          | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m03_ha-525790-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-525790 cp testdata/cp-test.txt                                                | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3362703692/001/cp-test_ha-525790-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790:/home/docker/cp-test_ha-525790-m04_ha-525790.txt                       |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790 sudo cat                                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m04_ha-525790.txt                                 |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m02:/home/docker/cp-test_ha-525790-m04_ha-525790-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790-m02 sudo cat                                          | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m04_ha-525790-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m03:/home/docker/cp-test_ha-525790-m04_ha-525790-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790-m03 sudo cat                                          | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m04_ha-525790-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-525790 node stop m02 -v=7                                                     | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-525790 node start m02 -v=7                                                    | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-525790 -v=7                                                           | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-525790 -v=7                                                                | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-525790 --wait=true -v=7                                                    | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-525790                                                                | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 19:06 UTC |                     |
	| node    | ha-525790 node delete m03 -v=7                                                   | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 19:07 UTC | 20 Sep 24 19:07 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
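For anyone replaying the copy/verify rows above by hand, a minimal sketch of the pattern the table records, reusing the profile and node names shown there (the binary path matches MINIKUBE_BIN in the start log below; the `-p` flag placement is an assumption):

	# copy the test file from worker m04 onto m02, then read it back over ssh
	out/minikube-linux-amd64 -p ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt \
	  ha-525790-m02:/home/docker/cp-test_ha-525790-m04_ha-525790-m02.txt
	out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790-m02 sudo cat \
	  /home/docker/cp-test_ha-525790-m04_ha-525790-m02.txt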
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:55:45
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:55:45.275296  768595 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:55:45.275412  768595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:55:45.275421  768595 out.go:358] Setting ErrFile to fd 2...
	I0920 18:55:45.275425  768595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:55:45.275635  768595 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
	I0920 18:55:45.276210  768595 out.go:352] Setting JSON to false
	I0920 18:55:45.277141  768595 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9495,"bootTime":1726849050,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:55:45.277240  768595 start.go:139] virtualization: kvm guest
	I0920 18:55:45.279445  768595 out.go:177] * [ha-525790] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:55:45.280764  768595 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 18:55:45.280835  768595 notify.go:220] Checking for updates...
	I0920 18:55:45.283366  768595 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:55:45.284696  768595 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:55:45.285940  768595 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:55:45.287169  768595 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:55:45.288409  768595 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:55:45.290193  768595 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:55:45.290315  768595 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:55:45.290797  768595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:55:45.290891  768595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:55:45.306404  768595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37931
	I0920 18:55:45.306820  768595 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:55:45.307492  768595 main.go:141] libmachine: Using API Version  1
	I0920 18:55:45.307521  768595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:55:45.307939  768595 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:55:45.308132  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:55:45.343272  768595 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 18:55:45.344502  768595 start.go:297] selected driver: kvm2
	I0920 18:55:45.344515  768595 start.go:901] validating driver "kvm2" against &{Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false e
fk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:55:45.344647  768595 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:55:45.344970  768595 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:55:45.345050  768595 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19678-739831/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:55:45.360027  768595 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:55:45.360707  768595 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:55:45.360736  768595 cni.go:84] Creating CNI manager for ""
	I0920 18:55:45.360793  768595 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0920 18:55:45.360859  768595 start.go:340] cluster config:
	{Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel
:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:55:45.361009  768595 iso.go:125] acquiring lock: {Name:mk7c8e0c52ea50ffb7ac28fb9347d4f667085c62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:55:45.363552  768595 out.go:177] * Starting "ha-525790" primary control-plane node in "ha-525790" cluster
	I0920 18:55:45.364920  768595 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:55:45.364979  768595 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:55:45.364990  768595 cache.go:56] Caching tarball of preloaded images
	I0920 18:55:45.365061  768595 preload.go:172] Found /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:55:45.365070  768595 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:55:45.365198  768595 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 18:55:45.365394  768595 start.go:360] acquireMachinesLock for ha-525790: {Name:mke27b943eaf3105a3a7818ba8cbb5bd07aa92e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:55:45.365441  768595 start.go:364] duration metric: took 28.871µs to acquireMachinesLock for "ha-525790"
	I0920 18:55:45.365453  768595 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:55:45.365460  768595 fix.go:54] fixHost starting: 
	I0920 18:55:45.365716  768595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:55:45.365748  768595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:55:45.379754  768595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37639
	I0920 18:55:45.380277  768595 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:55:45.380763  768595 main.go:141] libmachine: Using API Version  1
	I0920 18:55:45.380778  768595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:55:45.381096  768595 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:55:45.381300  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:55:45.381472  768595 main.go:141] libmachine: (ha-525790) Calling .GetState
	I0920 18:55:45.382944  768595 fix.go:112] recreateIfNeeded on ha-525790: state=Running err=<nil>
	W0920 18:55:45.382979  768595 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:55:45.384708  768595 out.go:177] * Updating the running kvm2 "ha-525790" VM ...
	I0920 18:55:45.385966  768595 machine.go:93] provisionDockerMachine start ...
	I0920 18:55:45.385981  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:55:45.386173  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:55:45.388503  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.388933  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:45.388960  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.389104  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:55:45.389273  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.389402  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.389518  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:55:45.389711  768595 main.go:141] libmachine: Using SSH client type: native
	I0920 18:55:45.389908  768595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:55:45.389919  768595 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:55:45.492072  768595 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-525790
	
	I0920 18:55:45.492099  768595 main.go:141] libmachine: (ha-525790) Calling .GetMachineName
	I0920 18:55:45.492366  768595 buildroot.go:166] provisioning hostname "ha-525790"
	I0920 18:55:45.492393  768595 main.go:141] libmachine: (ha-525790) Calling .GetMachineName
	I0920 18:55:45.492559  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:55:45.495258  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.495689  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:45.495715  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.495923  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:55:45.496094  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.496279  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.496427  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:55:45.496584  768595 main.go:141] libmachine: Using SSH client type: native
	I0920 18:55:45.496775  768595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:55:45.496788  768595 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-525790 && echo "ha-525790" | sudo tee /etc/hostname
	I0920 18:55:45.611170  768595 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-525790
	
	I0920 18:55:45.611203  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:55:45.613965  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.614392  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:45.614418  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.614605  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:55:45.614780  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.614979  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.615163  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:55:45.615334  768595 main.go:141] libmachine: Using SSH client type: native
	I0920 18:55:45.615507  768595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:55:45.615522  768595 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-525790' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-525790/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-525790' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:55:45.716203  768595 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:55:45.716236  768595 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19678-739831/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-739831/.minikube}
	I0920 18:55:45.716258  768595 buildroot.go:174] setting up certificates
	I0920 18:55:45.716266  768595 provision.go:84] configureAuth start
	I0920 18:55:45.716287  768595 main.go:141] libmachine: (ha-525790) Calling .GetMachineName
	I0920 18:55:45.716546  768595 main.go:141] libmachine: (ha-525790) Calling .GetIP
	I0920 18:55:45.719410  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.719789  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:45.719816  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.720053  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:55:45.722137  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.722463  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:45.722483  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.722613  768595 provision.go:143] copyHostCerts
	I0920 18:55:45.722648  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 18:55:45.722687  768595 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem, removing ...
	I0920 18:55:45.722704  768595 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 18:55:45.722767  768595 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem (1078 bytes)
	I0920 18:55:45.722893  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 18:55:45.722922  768595 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem, removing ...
	I0920 18:55:45.722929  768595 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 18:55:45.722959  768595 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem (1123 bytes)
	I0920 18:55:45.723019  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 18:55:45.723040  768595 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem, removing ...
	I0920 18:55:45.723046  768595 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 18:55:45.723071  768595 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem (1679 bytes)
	I0920 18:55:45.723132  768595 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem org=jenkins.ha-525790 san=[127.0.0.1 192.168.39.149 ha-525790 localhost minikube]
	I0920 18:55:45.874751  768595 provision.go:177] copyRemoteCerts
	I0920 18:55:45.874835  768595 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:55:45.874884  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:55:45.877528  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.877971  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:45.878002  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.878210  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:55:45.878387  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.878591  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:55:45.878724  768595 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:55:45.960427  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 18:55:45.960518  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0920 18:55:45.994757  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 18:55:45.994865  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 18:55:46.024642  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 18:55:46.024718  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:55:46.055496  768595 provision.go:87] duration metric: took 339.216483ms to configureAuth
	I0920 18:55:46.055535  768595 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:55:46.055829  768595 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:55:46.055929  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:55:46.058831  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:46.059288  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:46.059324  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:46.059533  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:55:46.059716  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:46.059891  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:46.060010  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:55:46.060167  768595 main.go:141] libmachine: Using SSH client type: native
	I0920 18:55:46.060375  768595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:55:46.060391  768595 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:57:16.901155  768595 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:57:16.901195  768595 machine.go:96] duration metric: took 1m31.515216231s to provisionDockerMachine
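The 1m31s above is dominated by the single SSH command issued at 18:55:46, which bundles the sysconfig write with `systemctl restart crio` and does not return until 18:57:16. A hedged sketch for checking the option file and runtime state afterwards (standard `minikube ssh` pass-through usage assumed; paths taken from the command itself):

	# confirm the insecure-registry option landed and cri-o is active again
	out/minikube-linux-amd64 -p ha-525790 ssh -- cat /etc/sysconfig/crio.minikube
	out/minikube-linux-amd64 -p ha-525790 ssh -- sudo systemctl is-active crio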
	I0920 18:57:16.901213  768595 start.go:293] postStartSetup for "ha-525790" (driver="kvm2")
	I0920 18:57:16.901229  768595 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:57:16.901256  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:57:16.901619  768595 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:57:16.901655  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:57:16.904582  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:16.905033  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:16.905077  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:16.905237  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:57:16.905435  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:57:16.905596  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:57:16.905768  768595 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:57:16.986592  768595 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:57:16.990860  768595 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:57:16.990889  768595 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/addons for local assets ...
	I0920 18:57:16.990948  768595 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/files for local assets ...
	I0920 18:57:16.991031  768595 filesync.go:149] local asset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> 7484972.pem in /etc/ssl/certs
	I0920 18:57:16.991042  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /etc/ssl/certs/7484972.pem
	I0920 18:57:16.991128  768595 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:57:17.000970  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 18:57:17.025421  768595 start.go:296] duration metric: took 124.189503ms for postStartSetup
	I0920 18:57:17.025508  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:57:17.025853  768595 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0920 18:57:17.025891  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:57:17.028640  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.029043  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:17.029071  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.029274  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:57:17.029491  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:57:17.029672  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:57:17.029818  768595 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	W0920 18:57:17.109879  768595 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0920 18:57:17.109911  768595 fix.go:56] duration metric: took 1m31.744451562s for fixHost
	I0920 18:57:17.109970  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:57:17.112933  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.113331  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:17.113363  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.113469  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:57:17.113648  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:57:17.113876  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:57:17.114026  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:57:17.114184  768595 main.go:141] libmachine: Using SSH client type: native
	I0920 18:57:17.114401  768595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:57:17.114415  768595 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:57:17.216062  768595 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726858637.181333539
	
	I0920 18:57:17.216090  768595 fix.go:216] guest clock: 1726858637.181333539
	I0920 18:57:17.216101  768595 fix.go:229] Guest: 2024-09-20 18:57:17.181333539 +0000 UTC Remote: 2024-09-20 18:57:17.109918074 +0000 UTC m=+91.872102399 (delta=71.415465ms)
	I0920 18:57:17.216125  768595 fix.go:200] guest clock delta is within tolerance: 71.415465ms
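The delta is simply the guest timestamp returned by the `date +%s.%N` probe minus minikube's own wall clock at the moment the command returned; checking the arithmetic with the two values logged above:

	# guest 1726858637.181333539 s, remote 1726858637.109918074 s (both 18:57:17 UTC)
	echo '1726858637.181333539 - 1726858637.109918074' | bc -l   # -> .071415465 s ≈ 71.4 ms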
	I0920 18:57:17.216130  768595 start.go:83] releasing machines lock for "ha-525790", held for 1m31.850683513s
	I0920 18:57:17.216152  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:57:17.216461  768595 main.go:141] libmachine: (ha-525790) Calling .GetIP
	I0920 18:57:17.219017  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.219376  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:17.219412  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.219494  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:57:17.220012  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:57:17.220193  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:57:17.220325  768595 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:57:17.220390  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:57:17.220399  768595 ssh_runner.go:195] Run: cat /version.json
	I0920 18:57:17.220418  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:57:17.222866  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.223251  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.223284  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:17.223301  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.223449  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:57:17.223621  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:57:17.223790  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:17.223811  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.223813  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:57:17.223960  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:57:17.223963  768595 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:57:17.224104  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:57:17.224245  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:57:17.224417  768595 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:57:17.296110  768595 ssh_runner.go:195] Run: systemctl --version
	I0920 18:57:17.321175  768595 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:57:17.477104  768595 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:57:17.485831  768595 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:57:17.485914  768595 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:57:17.495337  768595 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 18:57:17.495360  768595 start.go:495] detecting cgroup driver to use...
	I0920 18:57:17.495424  768595 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:57:17.511930  768595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:57:17.525328  768595 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:57:17.525387  768595 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:57:17.538722  768595 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:57:17.552122  768595 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:57:17.698681  768595 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:57:17.845821  768595 docker.go:233] disabling docker service ...
	I0920 18:57:17.845899  768595 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:57:17.863738  768595 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:57:17.877401  768595 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:57:18.024631  768595 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:57:18.172584  768595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:57:18.186842  768595 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:57:18.205846  768595 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:57:18.205925  768595 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.216288  768595 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:57:18.216358  768595 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.226555  768595 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.237201  768595 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.247630  768595 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:57:18.257984  768595 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.267924  768595 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.278978  768595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.288891  768595 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:57:18.297865  768595 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:57:18.306911  768595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:57:18.446180  768595 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:57:19.895749  768595 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.449526733s)
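Should the sed edits above need to be double-checked after this restart, a small sketch against the same file path (again assuming plain `minikube ssh` pass-through):

	# pause image, cgroup manager, conmon cgroup, and the sysctl added above should read back as rewritten
	out/minikube-linux-amd64 -p ha-525790 ssh -- \
	  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf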
	I0920 18:57:19.895791  768595 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:57:19.895837  768595 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:57:19.904678  768595 start.go:563] Will wait 60s for crictl version
	I0920 18:57:19.904743  768595 ssh_runner.go:195] Run: which crictl
	I0920 18:57:19.908608  768595 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:57:19.945193  768595 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:57:19.945279  768595 ssh_runner.go:195] Run: crio --version
	I0920 18:57:19.974543  768595 ssh_runner.go:195] Run: crio --version
	I0920 18:57:20.007822  768595 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:57:20.009139  768595 main.go:141] libmachine: (ha-525790) Calling .GetIP
	I0920 18:57:20.011764  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:20.012169  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:20.012198  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:20.012388  768595 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:57:20.017342  768595 kubeadm.go:883] updating cluster {Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:57:20.017482  768595 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:57:20.017559  768595 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:57:20.062678  768595 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:57:20.062704  768595 crio.go:433] Images already preloaded, skipping extraction
	I0920 18:57:20.062757  768595 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:57:20.098285  768595 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:57:20.098310  768595 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:57:20.098320  768595 kubeadm.go:934] updating node { 192.168.39.149 8443 v1.31.1 crio true true} ...
	I0920 18:57:20.098422  768595 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-525790 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:57:20.098485  768595 ssh_runner.go:195] Run: crio config
	I0920 18:57:20.146689  768595 cni.go:84] Creating CNI manager for ""
	I0920 18:57:20.146719  768595 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0920 18:57:20.146731  768595 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:57:20.146762  768595 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.149 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-525790 NodeName:ha-525790 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.149"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.149 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:57:20.146949  768595 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.149
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-525790"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.149
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.149"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
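The rendered kubeadm config above is copied onto the node a few steps later as /var/tmp/minikube/kubeadm.yaml.new; once that copy has happened it can be read back and spot-checked in place, e.g. (a sketch, not part of the test run):

	# confirm the file on the guest and its HA control-plane endpoint
	out/minikube-linux-amd64 -p ha-525790 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
	out/minikube-linux-amd64 -p ha-525790 ssh -- sudo grep controlPlaneEndpoint /var/tmp/minikube/kubeadm.yaml.new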
	
	I0920 18:57:20.146969  768595 kube-vip.go:115] generating kube-vip config ...
	I0920 18:57:20.147010  768595 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 18:57:20.158523  768595 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 18:57:20.158643  768595 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
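kube-vip runs as a static pod from the manifest above: 192.168.39.254 is the control-plane VIP, and lb_enable with lb_port 8443 turns on load-balancing across the API servers. Once kube-vip has claimed the address, an unauthenticated health probe against the VIP should normally answer (a sketch; the address and port are taken from the manifest above, and reachability from wherever you run this is assumed):

    # Probe the control-plane VIP declared in the manifest above.
    curl -k https://192.168.39.254:8443/healthz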
	I0920 18:57:20.158707  768595 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:57:20.168660  768595 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:57:20.168733  768595 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0920 18:57:20.178461  768595 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0920 18:57:20.198566  768595 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:57:20.217954  768595 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0920 18:57:20.237499  768595 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 18:57:20.258010  768595 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 18:57:20.262485  768595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:57:20.407038  768595 ssh_runner.go:195] Run: sudo systemctl start kubelet
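After the kube-vip manifest is dropped into staticPodPath (/etc/kubernetes/manifests) and kubelet is restarted above, kubelet itself creates the static pod. A quick way to confirm that on the node, sketched with crictl and the pod/container names that appear later in this log:

    # Confirm kubelet picked up the static pod manifest written above.
    sudo crictl pods --name kube-vip-ha-525790
    sudo crictl ps --name kube-vip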
	I0920 18:57:20.422336  768595 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790 for IP: 192.168.39.149
	I0920 18:57:20.422365  768595 certs.go:194] generating shared ca certs ...
	I0920 18:57:20.422387  768595 certs.go:226] acquiring lock for ca certs: {Name:mkf559981e1ff96dd3b092845a7637f34a653668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:57:20.422549  768595 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key
	I0920 18:57:20.422595  768595 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key
	I0920 18:57:20.422607  768595 certs.go:256] generating profile certs ...
	I0920 18:57:20.422714  768595 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.key
	I0920 18:57:20.422742  768595 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.5e4b97f0
	I0920 18:57:20.422758  768595 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.5e4b97f0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.149 192.168.39.246 192.168.39.105 192.168.39.254]
	I0920 18:57:20.498103  768595 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.5e4b97f0 ...
	I0920 18:57:20.498146  768595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.5e4b97f0: {Name:mkf1c7de4d51cd00dcbb302f98eb38a12aeaa743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:57:20.498349  768595 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.5e4b97f0 ...
	I0920 18:57:20.498366  768595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.5e4b97f0: {Name:mkd16bd720a2c366eb4c3af52495872448237117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:57:20.498439  768595 certs.go:381] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.5e4b97f0 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt
	I0920 18:57:20.498595  768595 certs.go:385] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.5e4b97f0 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key
	I0920 18:57:20.498727  768595 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key
	I0920 18:57:20.498744  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 18:57:20.498757  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 18:57:20.498773  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 18:57:20.498786  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 18:57:20.498798  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 18:57:20.498815  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 18:57:20.498828  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 18:57:20.498839  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 18:57:20.498902  768595 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem (1338 bytes)
	W0920 18:57:20.498929  768595 certs.go:480] ignoring /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497_empty.pem, impossibly tiny 0 bytes
	I0920 18:57:20.498939  768595 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:57:20.498966  768595 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:57:20.498987  768595 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:57:20.499009  768595 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem (1679 bytes)
	I0920 18:57:20.499046  768595 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 18:57:20.499073  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem -> /usr/share/ca-certificates/748497.pem
	I0920 18:57:20.499086  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /usr/share/ca-certificates/7484972.pem
	I0920 18:57:20.499098  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:57:20.499673  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:57:20.526194  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:57:20.550080  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:57:20.573814  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:57:20.597383  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0920 18:57:20.621333  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:57:20.644650  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:57:20.669077  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:57:20.692742  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem --> /usr/share/ca-certificates/748497.pem (1338 bytes)
	I0920 18:57:20.716696  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /usr/share/ca-certificates/7484972.pem (1708 bytes)
	I0920 18:57:20.740168  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:57:20.763494  768595 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
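The keys and certificates copied above land under /var/lib/minikube/certs, with the CA copies under /usr/share/ca-certificates. One way to spot-check that a transferred certificate is the expected one is to compare SHA-256 fingerprints on both ends (a sketch; paths are taken from the log lines above):

    # Compare fingerprints of the local profile cert and the copy on the node.
    openssl x509 -noout -fingerprint -sha256 \
      -in /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt
    sudo openssl x509 -noout -fingerprint -sha256 -in /var/lib/minikube/certs/apiserver.crt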
	I0920 18:57:20.779948  768595 ssh_runner.go:195] Run: openssl version
	I0920 18:57:20.785654  768595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7484972.pem && ln -fs /usr/share/ca-certificates/7484972.pem /etc/ssl/certs/7484972.pem"
	I0920 18:57:20.796055  768595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7484972.pem
	I0920 18:57:20.800308  768595 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 18:33 /usr/share/ca-certificates/7484972.pem
	I0920 18:57:20.800350  768595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7484972.pem
	I0920 18:57:20.805711  768595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7484972.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:57:20.814658  768595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:57:20.825022  768595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:57:20.829283  768595 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:57:20.829328  768595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:57:20.835197  768595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:57:20.844367  768595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/748497.pem && ln -fs /usr/share/ca-certificates/748497.pem /etc/ssl/certs/748497.pem"
	I0920 18:57:20.858330  768595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/748497.pem
	I0920 18:57:20.862698  768595 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 18:33 /usr/share/ca-certificates/748497.pem
	I0920 18:57:20.862756  768595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/748497.pem
	I0920 18:57:20.868290  768595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/748497.pem /etc/ssl/certs/51391683.0"
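The openssl/ln sequence above builds the standard OpenSSL hashed-directory layout: each CA under /etc/ssl/certs gets a symlink named after its subject-name hash (the b5213941.0, 3ec20f2e.0 and 51391683.0 names above) so verification code can find it by hash. A sketch of how those names are derived:

    # Derive the <hash>.0 symlink name used above for a given CA certificate.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"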
	I0920 18:57:20.877322  768595 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:57:20.881726  768595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:57:20.887174  768595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:57:20.892568  768595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:57:20.897933  768595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:57:20.903504  768595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:57:20.908964  768595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
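The -checkend 86400 probes above ask openssl whether each certificate will still be valid 86400 seconds (24 hours) from now; this is how minikube decides whether the existing control-plane certs can be reused. The same check in isolation:

    # Exit status 0: valid for at least another 24h; non-zero: expiring within 24h (or unreadable).
    if sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt; then
      echo "etcd server cert OK for another 24h"
    else
      echo "etcd server cert expiring soon"
    fi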
	I0920 18:57:20.914297  768595 kubeadm.go:392] StartCluster: {Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
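The StartCluster dump above describes the HA topology under test: three control-plane nodes (192.168.39.149, .246, .105) behind the VIP 192.168.39.254 on port 8443, plus worker m04. A quick reachability sweep over those endpoints, sketched with curl against the anonymously readable /version path (addresses come from the dump above; this is not part of the test itself):

    # Check each control-plane endpoint and the VIP on the apiserver port.
    for ip in 192.168.39.149 192.168.39.246 192.168.39.105 192.168.39.254; do
      curl -sk -o /dev/null -w "%{http_code} https://$ip:8443/version\n" "https://$ip:8443/version"
    done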
	I0920 18:57:20.914419  768595 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:57:20.914479  768595 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:57:20.953634  768595 cri.go:89] found id: "25771bcd68395f46a10f7a984281c99bb335a8ca69efb4245fa13e739f74e880"
	I0920 18:57:20.953663  768595 cri.go:89] found id: "05474c6dd3411b2d54bcdb9c489372dbdd009e7696128a025d961ffa61cea90e"
	I0920 18:57:20.953670  768595 cri.go:89] found id: "fdef47cd693637030df15d12b4203fda70a684a6ba84cf20353b69d3f9314810"
	I0920 18:57:20.953675  768595 cri.go:89] found id: "57fdde7a007ff9a10cfbb40f67eb3fd2036aeb4918ebe808fdb7ab94429b6f90"
	I0920 18:57:20.953679  768595 cri.go:89] found id: "172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1"
	I0920 18:57:20.953684  768595 cri.go:89] found id: "3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e"
	I0920 18:57:20.953688  768595 cri.go:89] found id: "5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98"
	I0920 18:57:20.953692  768595 cri.go:89] found id: "3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8"
	I0920 18:57:20.953696  768595 cri.go:89] found id: "c704a3be19bcb0cfb653cb3bdad4548ff16ab59fc886290b6b1ed57874b379cc"
	I0920 18:57:20.953705  768595 cri.go:89] found id: "7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706"
	I0920 18:57:20.953709  768595 cri.go:89] found id: "1196adfd1199669f289106197057a6a027e2a97c97830961d5c102c7143d67bb"
	I0920 18:57:20.953727  768595 cri.go:89] found id: "bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93"
	I0920 18:57:20.953734  768595 cri.go:89] found id: "49582cb9e07244a19c1edf1accdab94a1702d3cccc9e120b67b8c49f7629db72"
	I0920 18:57:20.953738  768595 cri.go:89] found id: ""
	I0920 18:57:20.953792  768595 ssh_runner.go:195] Run: sudo runc list -f json
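The container IDs listed above come from the crictl listing filtered to the kube-system namespace (the Run line at the top of this block). To dig into any one of them, crictl can inspect or tail it directly, e.g. the first ID found above:

    # Inspect / tail one of the kube-system containers found above.
    sudo crictl inspect 25771bcd68395f46a10f7a984281c99bb335a8ca69efb4245fa13e739f74e880
    sudo crictl logs --tail 20 25771bcd68395f46a10f7a984281c99bb335a8ca69efb4245fa13e739f74e880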
	
	
	==> CRI-O <==
	Sep 20 19:07:11 ha-525790 crio[3621]: time="2024-09-20 19:07:11.773140959Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859231773106868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e9a3b92f-0dec-4f6e-bf3c-5197060653e2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:07:11 ha-525790 crio[3621]: time="2024-09-20 19:07:11.773804453Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8afe8169-4069-46f8-9fa8-8c63c90598d9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:11 ha-525790 crio[3621]: time="2024-09-20 19:07:11.773879121Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8afe8169-4069-46f8-9fa8-8c63c90598d9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:11 ha-525790 crio[3621]: time="2024-09-20 19:07:11.774586713Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60cc74ea619fb947f0b84edcc3d897bff8752b038a8f9b1725bd5384cedcaabd,PodSandboxId:16a2a1305a51f60d21ddbbb6bb09b78cb9369c87e00a2d48189b0a13c1dff725,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726858739639401417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c15024e982c2250e97fb1c5a8ab6c46acbc2be83df9e1385a32e31ee31ed6,PodSandboxId:64ba18194b8cee6a5dc945c8a860e30df4889b5327784d28ca84fa101d3ee3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858689630170928,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5187d6ee59db46aeb3871c648064f41c7129c72b4fb12215dcfa9ff690e3dacc,PodSandboxId:16a2a1305a51f60d21ddbbb6bb09b78cb9369c87e00a2d48189b0a13c1dff725,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726858688628099955,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d017a5b283a90948c66d42e738cadc0b0558d24b471a758eba6ba1ba1d8f7c38,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858686629742984,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667c79074c454aa20ce82977f878cfe4a37c6f5ea0695c815cbba15549f3a45f,PodSandboxId:0a6e91416ea52f22ed259013e4a0a21bf72498fd8c931f248c2115876ae94d63,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726858681003653008,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f33ac8941017429ef2f8b90f5da558d02aee1e4f28f943f00cbb9948c09384,PodSandboxId:63cc3aec72e5ac7247b8edf4ee58731ed8aeec1987054757f74a79629da16d78,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726858662970382907,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250bdcc9f914b29a36cef0bb52cd1ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c9c9f659f7c8eea8960ff4d57a4049347beaf567d58cc6dd20e62ae35179d4,PodSandboxId:f4deab987a6c34e26ea3f805609a830b7822a6602102d1f413f9ce3e6884ef4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647995000091,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fefbc436d3eff2c5abd1684c59842894a743e26f9eb3ca974dbbc048112157d3,PodSandboxId:146e6c4948059463392671cd62b7ec8b5a29dc21e9ab6ebe87a3f7c25839e916,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858647885688947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041c8157b3922d976470968a7d805abcf93da72e8542bc6a01ac49bce0a281c4,PodSandboxId:097a4985f63bc254a2bdce47ba0c22b1b5b624fcf1c6cf3c7eb8ea0012d8d427,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726858647587461354,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5c19fcb571e8a32048d366070b16f702a20dcc8c9e90f7efdbdbcf068bb31c1,PodSandboxId:947865a8625cf4fd5130e55bfe16430a11573647782a77e8f42d10abe79271e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858647717412599,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1977c4370e572f9d46a89fdac2f5bf124e2fe7a35d95b9c4d329384e57264a9,PodSandboxId:dddb1e001fdf1c2b4148f34e191168adccea81cff10b5b3e18b1e18c43e20229,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647711164608,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf18d395747b302270fce67fcca779ef04ced72182d681e473e9fc81edb7b5a,PodSandboxId:f3b7300b04471e8ff85210139d0aaf4a37484fda119ba2d5c88427f9d6e07acb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858647658691652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561
e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a18e65180dd208a362d51b7dedc97d749dc64b3373b275ba6e9776934ebeb40,PodSandboxId:64ba18194b8cee6a5dc945c8a860e30df4889b5327784d28ca84fa101d3ee3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726858647533318453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:231315ec7d013fd90fbc26e6a4f8fc59d177b677f3ec717caa70d251b84de829,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726858647426978213,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:344b03b51dddb76a43e34eff505cae1c7f2fc0c407fd4b1907c7b90ca3f1740d,PodSandboxId:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726858192106346190,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e,PodSandboxId:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056980796182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1,PodSandboxId:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056983757613,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98,PodSandboxId:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726858044669171219,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8,PodSandboxId:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726858044313148897,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706,PodSandboxId:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726858033124074067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93,PodSandboxId:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726858033076556541,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8afe8169-4069-46f8-9fa8-8c63c90598d9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:11 ha-525790 crio[3621]: time="2024-09-20 19:07:11.815115822Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=885e1bf0-7435-4964-b90e-0760e3ac8a1f name=/runtime.v1.RuntimeService/Version
	Sep 20 19:07:11 ha-525790 crio[3621]: time="2024-09-20 19:07:11.815185191Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=885e1bf0-7435-4964-b90e-0760e3ac8a1f name=/runtime.v1.RuntimeService/Version
	Sep 20 19:07:11 ha-525790 crio[3621]: time="2024-09-20 19:07:11.816114383Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a08ded66-978e-4d55-a13e-a37eb31660c2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:07:11 ha-525790 crio[3621]: time="2024-09-20 19:07:11.816694465Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859231816668390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a08ded66-978e-4d55-a13e-a37eb31660c2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:07:11 ha-525790 crio[3621]: time="2024-09-20 19:07:11.817304020Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e9a45366-42ea-48c1-9df7-1c3f5d902d2d name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:11 ha-525790 crio[3621]: time="2024-09-20 19:07:11.817356068Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e9a45366-42ea-48c1-9df7-1c3f5d902d2d name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:11 ha-525790 crio[3621]: time="2024-09-20 19:07:11.817758394Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60cc74ea619fb947f0b84edcc3d897bff8752b038a8f9b1725bd5384cedcaabd,PodSandboxId:16a2a1305a51f60d21ddbbb6bb09b78cb9369c87e00a2d48189b0a13c1dff725,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726858739639401417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c15024e982c2250e97fb1c5a8ab6c46acbc2be83df9e1385a32e31ee31ed6,PodSandboxId:64ba18194b8cee6a5dc945c8a860e30df4889b5327784d28ca84fa101d3ee3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858689630170928,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5187d6ee59db46aeb3871c648064f41c7129c72b4fb12215dcfa9ff690e3dacc,PodSandboxId:16a2a1305a51f60d21ddbbb6bb09b78cb9369c87e00a2d48189b0a13c1dff725,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726858688628099955,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d017a5b283a90948c66d42e738cadc0b0558d24b471a758eba6ba1ba1d8f7c38,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858686629742984,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667c79074c454aa20ce82977f878cfe4a37c6f5ea0695c815cbba15549f3a45f,PodSandboxId:0a6e91416ea52f22ed259013e4a0a21bf72498fd8c931f248c2115876ae94d63,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726858681003653008,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f33ac8941017429ef2f8b90f5da558d02aee1e4f28f943f00cbb9948c09384,PodSandboxId:63cc3aec72e5ac7247b8edf4ee58731ed8aeec1987054757f74a79629da16d78,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726858662970382907,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250bdcc9f914b29a36cef0bb52cd1ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c9c9f659f7c8eea8960ff4d57a4049347beaf567d58cc6dd20e62ae35179d4,PodSandboxId:f4deab987a6c34e26ea3f805609a830b7822a6602102d1f413f9ce3e6884ef4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647995000091,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fefbc436d3eff2c5abd1684c59842894a743e26f9eb3ca974dbbc048112157d3,PodSandboxId:146e6c4948059463392671cd62b7ec8b5a29dc21e9ab6ebe87a3f7c25839e916,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858647885688947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041c8157b3922d976470968a7d805abcf93da72e8542bc6a01ac49bce0a281c4,PodSandboxId:097a4985f63bc254a2bdce47ba0c22b1b5b624fcf1c6cf3c7eb8ea0012d8d427,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726858647587461354,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5c19fcb571e8a32048d366070b16f702a20dcc8c9e90f7efdbdbcf068bb31c1,PodSandboxId:947865a8625cf4fd5130e55bfe16430a11573647782a77e8f42d10abe79271e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858647717412599,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1977c4370e572f9d46a89fdac2f5bf124e2fe7a35d95b9c4d329384e57264a9,PodSandboxId:dddb1e001fdf1c2b4148f34e191168adccea81cff10b5b3e18b1e18c43e20229,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647711164608,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf18d395747b302270fce67fcca779ef04ced72182d681e473e9fc81edb7b5a,PodSandboxId:f3b7300b04471e8ff85210139d0aaf4a37484fda119ba2d5c88427f9d6e07acb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858647658691652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561
e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a18e65180dd208a362d51b7dedc97d749dc64b3373b275ba6e9776934ebeb40,PodSandboxId:64ba18194b8cee6a5dc945c8a860e30df4889b5327784d28ca84fa101d3ee3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726858647533318453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:231315ec7d013fd90fbc26e6a4f8fc59d177b677f3ec717caa70d251b84de829,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726858647426978213,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:344b03b51dddb76a43e34eff505cae1c7f2fc0c407fd4b1907c7b90ca3f1740d,PodSandboxId:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726858192106346190,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e,PodSandboxId:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056980796182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1,PodSandboxId:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056983757613,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98,PodSandboxId:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726858044669171219,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8,PodSandboxId:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726858044313148897,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706,PodSandboxId:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726858033124074067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93,PodSandboxId:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726858033076556541,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e9a45366-42ea-48c1-9df7-1c3f5d902d2d name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:11 ha-525790 crio[3621]: time="2024-09-20 19:07:11.859445775Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=82e7ee6e-6917-45a4-a2d2-0d4cfb15b28c name=/runtime.v1.RuntimeService/Version
	Sep 20 19:07:11 ha-525790 crio[3621]: time="2024-09-20 19:07:11.859525171Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=82e7ee6e-6917-45a4-a2d2-0d4cfb15b28c name=/runtime.v1.RuntimeService/Version
	Sep 20 19:07:11 ha-525790 crio[3621]: time="2024-09-20 19:07:11.860554730Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f0a3923f-2483-4661-88b6-8b33fbafea6e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:07:11 ha-525790 crio[3621]: time="2024-09-20 19:07:11.861014514Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859231860989248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f0a3923f-2483-4661-88b6-8b33fbafea6e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:07:11 ha-525790 crio[3621]: time="2024-09-20 19:07:11.861664166Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b0d58244-8794-4c00-b80c-792cc69dd2b8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:11 ha-525790 crio[3621]: time="2024-09-20 19:07:11.861720155Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b0d58244-8794-4c00-b80c-792cc69dd2b8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:11 ha-525790 crio[3621]: time="2024-09-20 19:07:11.862253207Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60cc74ea619fb947f0b84edcc3d897bff8752b038a8f9b1725bd5384cedcaabd,PodSandboxId:16a2a1305a51f60d21ddbbb6bb09b78cb9369c87e00a2d48189b0a13c1dff725,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726858739639401417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c15024e982c2250e97fb1c5a8ab6c46acbc2be83df9e1385a32e31ee31ed6,PodSandboxId:64ba18194b8cee6a5dc945c8a860e30df4889b5327784d28ca84fa101d3ee3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858689630170928,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5187d6ee59db46aeb3871c648064f41c7129c72b4fb12215dcfa9ff690e3dacc,PodSandboxId:16a2a1305a51f60d21ddbbb6bb09b78cb9369c87e00a2d48189b0a13c1dff725,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726858688628099955,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d017a5b283a90948c66d42e738cadc0b0558d24b471a758eba6ba1ba1d8f7c38,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858686629742984,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667c79074c454aa20ce82977f878cfe4a37c6f5ea0695c815cbba15549f3a45f,PodSandboxId:0a6e91416ea52f22ed259013e4a0a21bf72498fd8c931f248c2115876ae94d63,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726858681003653008,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f33ac8941017429ef2f8b90f5da558d02aee1e4f28f943f00cbb9948c09384,PodSandboxId:63cc3aec72e5ac7247b8edf4ee58731ed8aeec1987054757f74a79629da16d78,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726858662970382907,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250bdcc9f914b29a36cef0bb52cd1ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c9c9f659f7c8eea8960ff4d57a4049347beaf567d58cc6dd20e62ae35179d4,PodSandboxId:f4deab987a6c34e26ea3f805609a830b7822a6602102d1f413f9ce3e6884ef4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647995000091,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fefbc436d3eff2c5abd1684c59842894a743e26f9eb3ca974dbbc048112157d3,PodSandboxId:146e6c4948059463392671cd62b7ec8b5a29dc21e9ab6ebe87a3f7c25839e916,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858647885688947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041c8157b3922d976470968a7d805abcf93da72e8542bc6a01ac49bce0a281c4,PodSandboxId:097a4985f63bc254a2bdce47ba0c22b1b5b624fcf1c6cf3c7eb8ea0012d8d427,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726858647587461354,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5c19fcb571e8a32048d366070b16f702a20dcc8c9e90f7efdbdbcf068bb31c1,PodSandboxId:947865a8625cf4fd5130e55bfe16430a11573647782a77e8f42d10abe79271e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858647717412599,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1977c4370e572f9d46a89fdac2f5bf124e2fe7a35d95b9c4d329384e57264a9,PodSandboxId:dddb1e001fdf1c2b4148f34e191168adccea81cff10b5b3e18b1e18c43e20229,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647711164608,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf18d395747b302270fce67fcca779ef04ced72182d681e473e9fc81edb7b5a,PodSandboxId:f3b7300b04471e8ff85210139d0aaf4a37484fda119ba2d5c88427f9d6e07acb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858647658691652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561
e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a18e65180dd208a362d51b7dedc97d749dc64b3373b275ba6e9776934ebeb40,PodSandboxId:64ba18194b8cee6a5dc945c8a860e30df4889b5327784d28ca84fa101d3ee3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726858647533318453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:231315ec7d013fd90fbc26e6a4f8fc59d177b677f3ec717caa70d251b84de829,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726858647426978213,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:344b03b51dddb76a43e34eff505cae1c7f2fc0c407fd4b1907c7b90ca3f1740d,PodSandboxId:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726858192106346190,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e,PodSandboxId:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056980796182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1,PodSandboxId:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056983757613,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98,PodSandboxId:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726858044669171219,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8,PodSandboxId:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726858044313148897,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706,PodSandboxId:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726858033124074067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93,PodSandboxId:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726858033076556541,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b0d58244-8794-4c00-b80c-792cc69dd2b8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:11 ha-525790 crio[3621]: time="2024-09-20 19:07:11.905228151Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b9bbeaee-d6d5-436b-a54f-c04cf51729d3 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:07:11 ha-525790 crio[3621]: time="2024-09-20 19:07:11.905362677Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b9bbeaee-d6d5-436b-a54f-c04cf51729d3 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:07:11 ha-525790 crio[3621]: time="2024-09-20 19:07:11.906222898Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=606a1f50-ce8e-4dd9-88a3-82c7892435c4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:07:11 ha-525790 crio[3621]: time="2024-09-20 19:07:11.906731831Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859231906681858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=606a1f50-ce8e-4dd9-88a3-82c7892435c4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:07:11 ha-525790 crio[3621]: time="2024-09-20 19:07:11.907389024Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c9370fd-946f-454d-a0e9-a07ce4614499 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:11 ha-525790 crio[3621]: time="2024-09-20 19:07:11.907462147Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0c9370fd-946f-454d-a0e9-a07ce4614499 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:07:11 ha-525790 crio[3621]: time="2024-09-20 19:07:11.912718738Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60cc74ea619fb947f0b84edcc3d897bff8752b038a8f9b1725bd5384cedcaabd,PodSandboxId:16a2a1305a51f60d21ddbbb6bb09b78cb9369c87e00a2d48189b0a13c1dff725,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726858739639401417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c15024e982c2250e97fb1c5a8ab6c46acbc2be83df9e1385a32e31ee31ed6,PodSandboxId:64ba18194b8cee6a5dc945c8a860e30df4889b5327784d28ca84fa101d3ee3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726858689630170928,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5187d6ee59db46aeb3871c648064f41c7129c72b4fb12215dcfa9ff690e3dacc,PodSandboxId:16a2a1305a51f60d21ddbbb6bb09b78cb9369c87e00a2d48189b0a13c1dff725,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726858688628099955,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d017a5b283a90948c66d42e738cadc0b0558d24b471a758eba6ba1ba1d8f7c38,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858686629742984,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667c79074c454aa20ce82977f878cfe4a37c6f5ea0695c815cbba15549f3a45f,PodSandboxId:0a6e91416ea52f22ed259013e4a0a21bf72498fd8c931f248c2115876ae94d63,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726858681003653008,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f33ac8941017429ef2f8b90f5da558d02aee1e4f28f943f00cbb9948c09384,PodSandboxId:63cc3aec72e5ac7247b8edf4ee58731ed8aeec1987054757f74a79629da16d78,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726858662970382907,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250bdcc9f914b29a36cef0bb52cd1ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c9c9f659f7c8eea8960ff4d57a4049347beaf567d58cc6dd20e62ae35179d4,PodSandboxId:f4deab987a6c34e26ea3f805609a830b7822a6602102d1f413f9ce3e6884ef4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647995000091,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fefbc436d3eff2c5abd1684c59842894a743e26f9eb3ca974dbbc048112157d3,PodSandboxId:146e6c4948059463392671cd62b7ec8b5a29dc21e9ab6ebe87a3f7c25839e916,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858647885688947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041c8157b3922d976470968a7d805abcf93da72e8542bc6a01ac49bce0a281c4,PodSandboxId:097a4985f63bc254a2bdce47ba0c22b1b5b624fcf1c6cf3c7eb8ea0012d8d427,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726858647587461354,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5c19fcb571e8a32048d366070b16f702a20dcc8c9e90f7efdbdbcf068bb31c1,PodSandboxId:947865a8625cf4fd5130e55bfe16430a11573647782a77e8f42d10abe79271e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858647717412599,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1977c4370e572f9d46a89fdac2f5bf124e2fe7a35d95b9c4d329384e57264a9,PodSandboxId:dddb1e001fdf1c2b4148f34e191168adccea81cff10b5b3e18b1e18c43e20229,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647711164608,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf18d395747b302270fce67fcca779ef04ced72182d681e473e9fc81edb7b5a,PodSandboxId:f3b7300b04471e8ff85210139d0aaf4a37484fda119ba2d5c88427f9d6e07acb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858647658691652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561
e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a18e65180dd208a362d51b7dedc97d749dc64b3373b275ba6e9776934ebeb40,PodSandboxId:64ba18194b8cee6a5dc945c8a860e30df4889b5327784d28ca84fa101d3ee3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726858647533318453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:231315ec7d013fd90fbc26e6a4f8fc59d177b677f3ec717caa70d251b84de829,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726858647426978213,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:344b03b51dddb76a43e34eff505cae1c7f2fc0c407fd4b1907c7b90ca3f1740d,PodSandboxId:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726858192106346190,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e,PodSandboxId:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056980796182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1,PodSandboxId:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056983757613,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98,PodSandboxId:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726858044669171219,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8,PodSandboxId:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726858044313148897,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706,PodSandboxId:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726858033124074067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93,PodSandboxId:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726858033076556541,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0c9370fd-946f-454d-a0e9-a07ce4614499 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	60cc74ea619fb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Running             storage-provisioner       4                   16a2a1305a51f       storage-provisioner
	613c15024e982       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      9 minutes ago       Running             kube-apiserver            3                   64ba18194b8ce       kube-apiserver-ha-525790
	5187d6ee59db4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       3                   16a2a1305a51f       storage-provisioner
	d017a5b283a90       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      9 minutes ago       Running             kube-controller-manager   2                   4014793ae3deb       kube-controller-manager-ha-525790
	667c79074c454       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      9 minutes ago       Running             busybox                   1                   0a6e91416ea52       busybox-7dff88458-z26jr
	22f33ac894101       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      9 minutes ago       Running             kube-vip                  0                   63cc3aec72e5a       kube-vip-ha-525790
	a2c9c9f659f7c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      9 minutes ago       Running             coredns                   1                   f4deab987a6c3       coredns-7c65d6cfc9-nfnkj
	fefbc436d3eff       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      9 minutes ago       Running             kube-proxy                1                   146e6c4948059       kube-proxy-958jz
	c5c19fcb571e8       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      9 minutes ago       Running             etcd                      1                   947865a8625cf       etcd-ha-525790
	a1977c4370e57       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      9 minutes ago       Running             coredns                   1                   dddb1e001fdf1       coredns-7c65d6cfc9-rpcds
	6cf18d395747b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      9 minutes ago       Running             kube-scheduler            1                   f3b7300b04471       kube-scheduler-ha-525790
	041c8157b3922       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      9 minutes ago       Running             kindnet-cni               1                   097a4985f63bc       kindnet-9qbm6
	8a18e65180dd2       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      9 minutes ago       Exited              kube-apiserver            2                   64ba18194b8ce       kube-apiserver-ha-525790
	231315ec7d013       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      9 minutes ago       Exited              kube-controller-manager   1                   4014793ae3deb       kube-controller-manager-ha-525790
	344b03b51dddb       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   17 minutes ago      Exited              busybox                   0                   125671e39b996       busybox-7dff88458-z26jr
	172e8f75d2a84       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      19 minutes ago      Exited              coredns                   0                   5dbd6acffd5c5       coredns-7c65d6cfc9-nfnkj
	3dff404b6ad2a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      19 minutes ago      Exited              coredns                   0                   34517f9f64c86       coredns-7c65d6cfc9-rpcds
	5579930bef0fc       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      19 minutes ago      Exited              kindnet-cni               0                   64136f65f6d34       kindnet-9qbm6
	3d469134674c2       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      19 minutes ago      Exited              kube-proxy                0                   2e440a5ac73b7       kube-proxy-958jz
	7d0496391eb85       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      19 minutes ago      Exited              kube-scheduler            0                   fae09dfcf3d6f       kube-scheduler-ha-525790
	bcca29b119984       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      19 minutes ago      Exited              etcd                      0                   17818940c2036       etcd-ha-525790
	
	
	==> coredns [172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1] <==
	[INFO] 10.244.1.2:49534 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000164113s
	[INFO] 10.244.2.2:50032 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167479s
	[INFO] 10.244.2.2:33413 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001865571s
	[INFO] 10.244.0.4:38374 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010475s
	[INFO] 10.244.0.4:44676 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170058s
	[INFO] 10.244.0.4:54182 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000123082s
	[INFO] 10.244.0.4:52067 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108075s
	[INFO] 10.244.1.2:36885 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000133944s
	[INFO] 10.244.2.2:48327 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127372s
	[INFO] 10.244.2.2:52262 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160755s
	[INFO] 10.244.0.4:44171 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111758s
	[INFO] 10.244.1.2:36220 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000196033s
	[INFO] 10.244.1.2:33859 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000222322s
	[INFO] 10.244.1.2:55349 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000158431s
	[INFO] 10.244.2.2:37976 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138385s
	[INFO] 10.244.2.2:56378 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000191303s
	[INFO] 10.244.2.2:54246 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117607s
	[INFO] 10.244.0.4:53115 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116565s
	[INFO] 10.244.0.4:49608 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000095821s
	[INFO] 10.244.0.4:60862 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111997s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e] <==
	[INFO] 10.244.2.2:42750 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000311517s
	[INFO] 10.244.2.2:42748 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001319529s
	[INFO] 10.244.2.2:49203 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000190348s
	[INFO] 10.244.2.2:44849 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00019366s
	[INFO] 10.244.2.2:52186 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103082s
	[INFO] 10.244.0.4:58300 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140735s
	[INFO] 10.244.0.4:59752 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001702673s
	[INFO] 10.244.0.4:33721 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001170599s
	[INFO] 10.244.0.4:42180 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061647s
	[INFO] 10.244.1.2:49177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000333372s
	[INFO] 10.244.1.2:57192 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147894s
	[INFO] 10.244.1.2:59125 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095482s
	[INFO] 10.244.2.2:50879 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019818s
	[INFO] 10.244.2.2:47467 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096359s
	[INFO] 10.244.0.4:54464 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087148s
	[INFO] 10.244.0.4:40326 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011895s
	[INFO] 10.244.0.4:46142 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071583s
	[INFO] 10.244.1.2:50168 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000224622s
	[INFO] 10.244.2.2:50611 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000117577s
	[INFO] 10.244.0.4:57391 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000320119s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1717&timeout=7m26s&timeoutSeconds=446&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1736&timeout=7m56s&timeoutSeconds=476&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1713&timeout=6m15s&timeoutSeconds=375&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a1977c4370e572f9d46a89fdac2f5bf124e2fe7a35d95b9c4d329384e57264a9] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[211372876]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (20-Sep-2024 18:57:36.158) (total time: 10001ms):
	Trace[211372876]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:57:46.159)
	Trace[211372876]: [10.001611713s] [10.001611713s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:33172->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:33172->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [a2c9c9f659f7c8eea8960ff4d57a4049347beaf567d58cc6dd20e62ae35179d4] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[159325140]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (20-Sep-2024 18:57:32.397) (total time: 10001ms):
	Trace[159325140]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:57:42.399)
	Trace[159325140]: [10.001577176s] [10.001577176s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:56710->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:56710->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:56726->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:56726->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-525790
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-525790
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=ha-525790
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_47_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:47:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-525790
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:07:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 19:03:15 +0000   Fri, 20 Sep 2024 18:47:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 19:03:15 +0000   Fri, 20 Sep 2024 18:47:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 19:03:15 +0000   Fri, 20 Sep 2024 18:47:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 19:03:15 +0000   Fri, 20 Sep 2024 18:47:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.149
	  Hostname:    ha-525790
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d3f2b96a8819496a94e034cf4adf7a85
	  System UUID:                d3f2b96a-8819-496a-94e0-34cf4adf7a85
	  Boot ID:                    02f79ecd-567f-4683-83ce-59afb46feab6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-z26jr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-7c65d6cfc9-nfnkj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                 coredns-7c65d6cfc9-rpcds             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                 etcd-ha-525790                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-9qbm6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-525790             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-525790    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-958jz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-525790             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-525790                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 19m                  kube-proxy       
	  Normal   Starting                 9m2s                 kube-proxy       
	  Normal   NodeHasSufficientPID     19m                  kubelet          Node ha-525790 status is now: NodeHasSufficientPID
	  Normal   Starting                 19m                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  19m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  19m                  kubelet          Node ha-525790 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m                  kubelet          Node ha-525790 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           19m                  node-controller  Node ha-525790 event: Registered Node ha-525790 in Controller
	  Normal   NodeReady                19m                  kubelet          Node ha-525790 status is now: NodeReady
	  Normal   RegisteredNode           18m                  node-controller  Node ha-525790 event: Registered Node ha-525790 in Controller
	  Normal   RegisteredNode           17m                  node-controller  Node ha-525790 event: Registered Node ha-525790 in Controller
	  Normal   NodeNotReady             10m (x3 over 10m)    kubelet          Node ha-525790 status is now: NodeNotReady
	  Warning  ContainerGCFailed        9m53s (x2 over 10m)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           9m7s                 node-controller  Node ha-525790 event: Registered Node ha-525790 in Controller
	  Normal   RegisteredNode           8m58s                node-controller  Node ha-525790 event: Registered Node ha-525790 in Controller
	
	
	Name:               ha-525790-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-525790-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=ha-525790
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T18_48_16_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:48:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-525790-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:07:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 19:04:00 +0000   Fri, 20 Sep 2024 18:58:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 19:04:00 +0000   Fri, 20 Sep 2024 18:58:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 19:04:00 +0000   Fri, 20 Sep 2024 18:58:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 19:04:00 +0000   Fri, 20 Sep 2024 18:58:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.246
	  Hostname:    ha-525790-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1dbde4511fc24bbcb1281f7b7d6ff24f
	  System UUID:                1dbde451-1fc2-4bbc-b128-1f7b7d6ff24f
	  Boot ID:                    d5658712-0cd7-4a8d-96e7-dd80ca41efeb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7jtss                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-525790-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kindnet-8glgp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-apiserver-ha-525790-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-525790-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-sspfs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-525790-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-525790-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 18m                    kube-proxy       
	  Normal  Starting                 8m40s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)      kubelet          Node ha-525790-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)      kubelet          Node ha-525790-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)      kubelet          Node ha-525790-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                    node-controller  Node ha-525790-m02 event: Registered Node ha-525790-m02 in Controller
	  Normal  RegisteredNode           18m                    node-controller  Node ha-525790-m02 event: Registered Node ha-525790-m02 in Controller
	  Normal  RegisteredNode           17m                    node-controller  Node ha-525790-m02 event: Registered Node ha-525790-m02 in Controller
	  Normal  NodeNotReady             15m                    node-controller  Node ha-525790-m02 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  9m28s (x8 over 9m28s)  kubelet          Node ha-525790-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 9m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    9m28s (x8 over 9m28s)  kubelet          Node ha-525790-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m28s (x7 over 9m28s)  kubelet          Node ha-525790-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m7s                   node-controller  Node ha-525790-m02 event: Registered Node ha-525790-m02 in Controller
	  Normal  RegisteredNode           8m58s                  node-controller  Node ha-525790-m02 event: Registered Node ha-525790-m02 in Controller
	
	
	Name:               ha-525790-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-525790-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=ha-525790
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T18_50_26_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:50:26 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-525790-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:53:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 20 Sep 2024 18:50:57 +0000   Fri, 20 Sep 2024 18:58:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 20 Sep 2024 18:50:57 +0000   Fri, 20 Sep 2024 18:58:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 20 Sep 2024 18:50:57 +0000   Fri, 20 Sep 2024 18:58:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 20 Sep 2024 18:50:57 +0000   Fri, 20 Sep 2024 18:58:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.181
	  Hostname:    ha-525790-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c58d814e5e5d49b699d9f977eb54ff58
	  System UUID:                c58d814e-5e5d-49b6-99d9-f977eb54ff58
	  Boot ID:                    69924ac5-b6f2-4ddd-bd0d-fa3c683681d6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-df8hf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-proxy-w98cx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  NodeHasSufficientMemory  16m (x2 over 16m)  kubelet          Node ha-525790-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x2 over 16m)  kubelet          Node ha-525790-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x2 over 16m)  kubelet          Node ha-525790-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16m                node-controller  Node ha-525790-m04 event: Registered Node ha-525790-m04 in Controller
	  Normal  RegisteredNode           16m                node-controller  Node ha-525790-m04 event: Registered Node ha-525790-m04 in Controller
	  Normal  RegisteredNode           16m                node-controller  Node ha-525790-m04 event: Registered Node ha-525790-m04 in Controller
	  Normal  NodeReady                16m                kubelet          Node ha-525790-m04 status is now: NodeReady
	  Normal  RegisteredNode           9m7s               node-controller  Node ha-525790-m04 event: Registered Node ha-525790-m04 in Controller
	  Normal  RegisteredNode           8m58s              node-controller  Node ha-525790-m04 event: Registered Node ha-525790-m04 in Controller
	  Normal  NodeNotReady             8m27s              node-controller  Node ha-525790-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep20 18:47] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.053987] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058272] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.180542] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.143015] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.280287] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +3.923962] systemd-fstab-generator[743]: Ignoring "noauto" option for root device
	[  +3.905808] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.064972] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.290695] systemd-fstab-generator[1298]: Ignoring "noauto" option for root device
	[  +0.091789] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.472602] kauditd_printk_skb: 36 callbacks suppressed
	[ +11.974718] kauditd_printk_skb: 23 callbacks suppressed
	[Sep20 18:48] kauditd_printk_skb: 24 callbacks suppressed
	[Sep20 18:57] systemd-fstab-generator[3539]: Ignoring "noauto" option for root device
	[  +0.144284] systemd-fstab-generator[3551]: Ignoring "noauto" option for root device
	[  +0.175904] systemd-fstab-generator[3567]: Ignoring "noauto" option for root device
	[  +0.158366] systemd-fstab-generator[3579]: Ignoring "noauto" option for root device
	[  +0.266903] systemd-fstab-generator[3607]: Ignoring "noauto" option for root device
	[  +1.960904] systemd-fstab-generator[3709]: Ignoring "noauto" option for root device
	[  +6.729271] kauditd_printk_skb: 122 callbacks suppressed
	[ +12.246162] kauditd_printk_skb: 97 callbacks suppressed
	[ +10.067162] kauditd_printk_skb: 1 callbacks suppressed
	[Sep20 18:58] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93] <==
	2024/09/20 18:55:46 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/20 18:55:46 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/20 18:55:46 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/20 18:55:46 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-20T18:55:46.260496Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.149:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T18:55:46.260580Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.149:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-20T18:55:46.260677Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"ba3e3e863cacc4d","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-20T18:55:46.260889Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c8e33f35ad636831"}
	{"level":"info","ts":"2024-09-20T18:55:46.260941Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c8e33f35ad636831"}
	{"level":"info","ts":"2024-09-20T18:55:46.260980Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c8e33f35ad636831"}
	{"level":"info","ts":"2024-09-20T18:55:46.261097Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831"}
	{"level":"info","ts":"2024-09-20T18:55:46.261203Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831"}
	{"level":"info","ts":"2024-09-20T18:55:46.261373Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831"}
	{"level":"info","ts":"2024-09-20T18:55:46.261477Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c8e33f35ad636831"}
	{"level":"info","ts":"2024-09-20T18:55:46.261501Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T18:55:46.261581Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T18:55:46.261695Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T18:55:46.261817Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ba3e3e863cacc4d","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T18:55:46.261878Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ba3e3e863cacc4d","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T18:55:46.261957Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ba3e3e863cacc4d","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T18:55:46.261987Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T18:55:46.264966Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.149:2380"}
	{"level":"warn","ts":"2024-09-20T18:55:46.265043Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.834260925s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-20T18:55:46.265109Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.149:2380"}
	{"level":"info","ts":"2024-09-20T18:55:46.265138Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-525790","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.149:2380"],"advertise-client-urls":["https://192.168.39.149:2379"]}
	
	
	==> etcd [c5c19fcb571e8a32048d366070b16f702a20dcc8c9e90f7efdbdbcf068bb31c1] <==
	{"level":"warn","ts":"2024-09-20T19:06:48.865957Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6c1c1087b613d98","rtt":"0s","error":"dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:51.451602Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.105:2380/version","remote-member-id":"6c1c1087b613d98","error":"Get \"https://192.168.39.105:2380/version\": dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:51.451671Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"6c1c1087b613d98","error":"Get \"https://192.168.39.105:2380/version\": dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:53.866413Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6c1c1087b613d98","rtt":"0s","error":"dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:53.866484Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6c1c1087b613d98","rtt":"0s","error":"dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:55.453403Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.105:2380/version","remote-member-id":"6c1c1087b613d98","error":"Get \"https://192.168.39.105:2380/version\": dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:55.453528Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"6c1c1087b613d98","error":"Get \"https://192.168.39.105:2380/version\": dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:58.867162Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6c1c1087b613d98","rtt":"0s","error":"dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:58.867211Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6c1c1087b613d98","rtt":"0s","error":"dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:59.455812Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.105:2380/version","remote-member-id":"6c1c1087b613d98","error":"Get \"https://192.168.39.105:2380/version\": dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:06:59.455942Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"6c1c1087b613d98","error":"Get \"https://192.168.39.105:2380/version\": dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:07:03.458249Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.105:2380/version","remote-member-id":"6c1c1087b613d98","error":"Get \"https://192.168.39.105:2380/version\": dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:07:03.458445Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"6c1c1087b613d98","error":"Get \"https://192.168.39.105:2380/version\": dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:07:03.867718Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6c1c1087b613d98","rtt":"0s","error":"dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T19:07:03.867757Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6c1c1087b613d98","rtt":"0s","error":"dial tcp 192.168.39.105:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-20T19:07:06.307761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d switched to configuration voters=(838764542867197005 14475483127073695793)"}
	{"level":"info","ts":"2024-09-20T19:07:06.313082Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"65f5490397676253","local-member-id":"ba3e3e863cacc4d","removed-remote-peer-id":"6c1c1087b613d98","removed-remote-peer-urls":["https://192.168.39.105:2380"]}
	{"level":"info","ts":"2024-09-20T19:07:06.313184Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T19:07:06.313243Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T19:07:06.313346Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T19:07:06.313440Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ba3e3e863cacc4d","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T19:07:06.313502Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ba3e3e863cacc4d","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T19:07:06.313533Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ba3e3e863cacc4d","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T19:07:06.313548Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T19:07:06.313562Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"ba3e3e863cacc4d","removed-remote-peer-id":"6c1c1087b613d98"}
	
	
	==> kernel <==
	 19:07:12 up 20 min,  0 users,  load average: 1.22, 0.53, 0.37
	Linux ha-525790 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [041c8157b3922d976470968a7d805abcf93da72e8542bc6a01ac49bce0a281c4] <==
	I0920 19:06:38.905786       1 main.go:322] Node ha-525790-m03 has CIDR [10.244.2.0/24] 
	I0920 19:06:38.905836       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 19:06:38.905865       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	I0920 19:06:48.912129       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 19:06:48.912193       1 main.go:299] handling current node
	I0920 19:06:48.912215       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 19:06:48.912224       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 19:06:48.912465       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0920 19:06:48.912503       1 main.go:322] Node ha-525790-m03 has CIDR [10.244.2.0/24] 
	I0920 19:06:48.912579       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 19:06:48.912587       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	I0920 19:06:58.914181       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0920 19:06:58.914363       1 main.go:322] Node ha-525790-m03 has CIDR [10.244.2.0/24] 
	I0920 19:06:58.914504       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 19:06:58.914530       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	I0920 19:06:58.914610       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 19:06:58.914630       1 main.go:299] handling current node
	I0920 19:06:58.914657       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 19:06:58.914674       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 19:07:08.906374       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 19:07:08.906419       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	I0920 19:07:08.906610       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 19:07:08.906618       1 main.go:299] handling current node
	I0920 19:07:08.906636       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 19:07:08.906640       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98] <==
	I0920 18:55:25.880711       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:55:25.880821       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 18:55:25.880982       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0920 18:55:25.881006       1 main.go:322] Node ha-525790-m03 has CIDR [10.244.2.0/24] 
	I0920 18:55:25.881062       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 18:55:25.881173       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	I0920 18:55:25.881330       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 18:55:25.881366       1 main.go:299] handling current node
	I0920 18:55:35.880574       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 18:55:35.880753       1 main.go:299] handling current node
	I0920 18:55:35.880788       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:55:35.880809       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 18:55:35.880968       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0920 18:55:35.881037       1 main.go:322] Node ha-525790-m03 has CIDR [10.244.2.0/24] 
	I0920 18:55:35.881188       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 18:55:35.881225       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	E0920 18:55:44.415378       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes)
	I0920 18:55:45.880519       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 18:55:45.880573       1 main.go:299] handling current node
	I0920 18:55:45.880594       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:55:45.880614       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 18:55:45.880735       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0920 18:55:45.880740       1 main.go:322] Node ha-525790-m03 has CIDR [10.244.2.0/24] 
	I0920 18:55:45.880784       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 18:55:45.880788       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [613c15024e982c2250e97fb1c5a8ab6c46acbc2be83df9e1385a32e31ee31ed6] <==
	I0920 18:58:11.563790       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0920 18:58:11.566669       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0920 18:58:11.647086       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0920 18:58:11.649520       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 18:58:11.649641       1 policy_source.go:224] refreshing policies
	I0920 18:58:11.663884       1 shared_informer.go:320] Caches are synced for configmaps
	I0920 18:58:11.678760       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0920 18:58:11.678792       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0920 18:58:11.679064       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0920 18:58:11.679104       1 aggregator.go:171] initial CRD sync complete...
	I0920 18:58:11.679119       1 autoregister_controller.go:144] Starting autoregister controller
	I0920 18:58:11.679124       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0920 18:58:11.679129       1 cache.go:39] Caches are synced for autoregister controller
	I0920 18:58:11.711573       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	W0920 18:58:11.731002       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.246]
	I0920 18:58:11.733911       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 18:58:11.740737       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0920 18:58:11.743338       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0920 18:58:11.744337       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 18:58:11.750631       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0920 18:58:11.753464       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0920 18:58:11.759128       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0920 18:58:11.762522       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0920 18:58:12.550864       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0920 18:58:13.073998       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.149 192.168.39.246]
	
	
	==> kube-apiserver [8a18e65180dd208a362d51b7dedc97d749dc64b3373b275ba6e9776934ebeb40] <==
	I0920 18:57:28.372014       1 options.go:228] external host was not specified, using 192.168.39.149
	I0920 18:57:28.378488       1 server.go:142] Version: v1.31.1
	I0920 18:57:28.378636       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:57:28.839362       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0920 18:57:28.848920       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0920 18:57:28.849194       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0920 18:57:28.849607       1 instance.go:232] Using reconciler: lease
	I0920 18:57:28.850178       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0920 18:57:48.834604       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0920 18:57:48.834768       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0920 18:57:48.850633       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [231315ec7d013fd90fbc26e6a4f8fc59d177b677f3ec717caa70d251b84de829] <==
	I0920 18:57:29.220757       1 serving.go:386] Generated self-signed cert in-memory
	I0920 18:57:29.591669       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0920 18:57:29.591756       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:57:29.593495       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0920 18:57:29.593650       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0920 18:57:29.593717       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0920 18:57:29.593811       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0920 18:57:49.858234       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.149:8443/healthz\": dial tcp 192.168.39.149:8443: connect: connection refused"
	
	
	==> kube-controller-manager [d017a5b283a90948c66d42e738cadc0b0558d24b471a758eba6ba1ba1d8f7c38] <==
	I0920 18:58:49.788903       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:58:50.566798       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m03"
	I0920 18:58:53.132790       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m02"
	I0920 18:58:59.474045       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m03"
	I0920 18:58:59.492472       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m03"
	I0920 18:58:59.712336       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m03"
	I0920 18:59:00.361226       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="74.989µs"
	I0920 18:59:00.746940       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m04"
	I0920 18:59:11.733735       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.059122ms"
	I0920 18:59:11.735538       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="64.244µs"
	I0920 18:59:29.887985       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m03"
	I0920 19:03:15.447982       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790"
	I0920 19:04:00.014795       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m02"
	I0920 19:04:36.164186       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m03"
	I0920 19:07:02.917240       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m03"
	I0920 19:07:02.944139       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m03"
	I0920 19:07:03.006373       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.585632ms"
	I0920 19:07:03.064470       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="58.042822ms"
	I0920 19:07:03.077715       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.143351ms"
	I0920 19:07:03.078101       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="60.13µs"
	I0920 19:07:05.091941       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="70.932µs"
	I0920 19:07:05.251002       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="134.782µs"
	I0920 19:07:05.255355       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="142.045µs"
	I0920 19:07:06.927218       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-525790-m03"
	E0920 19:07:06.970503       1 garbagecollector.go:399] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"coordination.k8s.io/v1\", Kind:\"Lease\", Name:\"ha-525790-m03\", UID:\"4e101ad6-fea3-4e7b-b427-b332dd32130b\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"kube-node-lease\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-525790-m03\", UID:\"ffd11555-2f09-4ba2-b423-72f88865bfc4\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io \"ha-525790-m03\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8] <==
	E0920 18:54:29.445959       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:54:32.517585       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:54:32.517837       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:54:32.517758       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:54:32.518080       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:54:38.533704       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:54:38.533801       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:54:44.681435       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:54:44.681569       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:54:47.750519       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:54:47.750703       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:54:50.823395       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:54:50.823514       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:55:03.112602       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:55:03.112697       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:55:06.182242       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:55:06.182543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:55:12.325811       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:55:12.325963       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:55:30.759875       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:55:30.760470       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:55:46.118426       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:55:46.118557       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:55:46.118676       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:55:46.118740       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [fefbc436d3eff2c5abd1684c59842894a743e26f9eb3ca974dbbc048112157d3] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 18:57:30.565761       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-525790\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 18:57:33.640050       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-525790\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 18:57:36.709752       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-525790\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 18:57:42.859625       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-525790\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 18:57:52.070869       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-525790\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0920 18:58:09.389517       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.149"]
	E0920 18:58:09.389799       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:58:09.436483       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:58:09.436606       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:58:09.436666       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:58:09.441038       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:58:09.441633       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:58:09.442010       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:58:09.446213       1 config.go:199] "Starting service config controller"
	I0920 18:58:09.446402       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:58:09.446500       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:58:09.446578       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:58:09.448018       1 config.go:328] "Starting node config controller"
	I0920 18:58:09.448140       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:58:09.548198       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:58:09.548215       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:58:09.548482       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6cf18d395747b302270fce67fcca779ef04ced72182d681e473e9fc81edb7b5a] <==
	W0920 18:58:04.819433       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.149:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 18:58:04.819512       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.149:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:58:05.565781       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.149:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 18:58:05.565894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.149:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:58:06.403484       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.149:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 18:58:06.403602       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.149:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:58:06.898106       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.149:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 18:58:06.898186       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.149:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:58:06.917971       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.149:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 18:58:06.918035       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.149:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:58:07.845992       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.149:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 18:58:07.846092       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.149:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:58:08.543222       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.149:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 18:58:08.543360       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.149:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:58:08.927724       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.149:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 18:58:08.927853       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.149:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:58:09.205871       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.149:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 18:58:09.205944       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.149:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 18:58:11.576509       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 18:58:11.576628       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:58:11.576905       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 18:58:11.577007       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:58:11.577232       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 18:58:11.577486       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0920 18:58:30.575744       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706] <==
	I0920 18:50:26.263699       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-w98cx" node="ha-525790-m04"
	E0920 18:50:26.297985       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-hwgsh\": pod kindnet-hwgsh is already assigned to node \"ha-525790-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-hwgsh" node="ha-525790-m04"
	E0920 18:50:26.298064       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9ff40332-cdad-4e9f-99ca-28d1271713a8(kube-system/kindnet-hwgsh) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-hwgsh"
	E0920 18:50:26.298079       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-hwgsh\": pod kindnet-hwgsh is already assigned to node \"ha-525790-m04\"" pod="kube-system/kindnet-hwgsh"
	I0920 18:50:26.298095       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-hwgsh" node="ha-525790-m04"
	E0920 18:50:26.298461       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-rh89s\": pod kube-proxy-rh89s is already assigned to node \"ha-525790-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-rh89s" node="ha-525790-m04"
	E0920 18:50:26.298512       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 340d5abf-2e79-4cc0-8f1f-130c1e176259(kube-system/kube-proxy-rh89s) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-rh89s"
	E0920 18:50:26.298529       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-rh89s\": pod kube-proxy-rh89s is already assigned to node \"ha-525790-m04\"" pod="kube-system/kube-proxy-rh89s"
	I0920 18:50:26.298548       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-rh89s" node="ha-525790-m04"
	E0920 18:55:33.838133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0920 18:55:34.010012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0920 18:55:34.163933       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0920 18:55:35.197228       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0920 18:55:38.126361       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0920 18:55:38.323639       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0920 18:55:39.518704       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0920 18:55:39.859524       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0920 18:55:40.061646       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0920 18:55:40.131765       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0920 18:55:41.452147       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0920 18:55:43.449377       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0920 18:55:44.076439       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0920 18:55:44.277830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0920 18:55:45.932626       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0920 18:55:46.167365       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 20 19:05:39 ha-525790 kubelet[1305]: E0920 19:05:39.997200    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859139996916490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:05:39 ha-525790 kubelet[1305]: E0920 19:05:39.997304    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859139996916490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:05:49 ha-525790 kubelet[1305]: E0920 19:05:49.999055    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859149998696986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:05:49 ha-525790 kubelet[1305]: E0920 19:05:49.999377    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859149998696986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:00 ha-525790 kubelet[1305]: E0920 19:06:00.001953    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859160001440634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:00 ha-525790 kubelet[1305]: E0920 19:06:00.001978    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859160001440634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:10 ha-525790 kubelet[1305]: E0920 19:06:10.003368    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859170002995065,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:10 ha-525790 kubelet[1305]: E0920 19:06:10.003451    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859170002995065,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:19 ha-525790 kubelet[1305]: E0920 19:06:19.637558    1305 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 19:06:19 ha-525790 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 19:06:19 ha-525790 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 19:06:19 ha-525790 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 19:06:19 ha-525790 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 19:06:20 ha-525790 kubelet[1305]: E0920 19:06:20.006100    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859180005668944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:20 ha-525790 kubelet[1305]: E0920 19:06:20.006127    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859180005668944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:30 ha-525790 kubelet[1305]: E0920 19:06:30.007559    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859190007239612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:30 ha-525790 kubelet[1305]: E0920 19:06:30.007607    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859190007239612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:40 ha-525790 kubelet[1305]: E0920 19:06:40.010351    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859200009651480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:40 ha-525790 kubelet[1305]: E0920 19:06:40.010404    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859200009651480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:50 ha-525790 kubelet[1305]: E0920 19:06:50.012654    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859210012010968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:06:50 ha-525790 kubelet[1305]: E0920 19:06:50.013017    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859210012010968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:07:00 ha-525790 kubelet[1305]: E0920 19:07:00.015090    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859220014798886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:07:00 ha-525790 kubelet[1305]: E0920 19:07:00.015116    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859220014798886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:07:10 ha-525790 kubelet[1305]: E0920 19:07:10.017892    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859230016933324,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:07:10 ha-525790 kubelet[1305]: E0920 19:07:10.017923    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859230016933324,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 19:07:11.483415  771781 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19678-739831/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-525790 -n ha-525790
helpers_test.go:261: (dbg) Run:  kubectl --context ha-525790 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-fcds6 etcd-ha-525790-m03 kube-controller-manager-ha-525790-m03 kube-vip-ha-525790-m03
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-525790 describe pod busybox-7dff88458-fcds6 etcd-ha-525790-m03 kube-controller-manager-ha-525790-m03 kube-vip-ha-525790-m03
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context ha-525790 describe pod busybox-7dff88458-fcds6 etcd-ha-525790-m03 kube-controller-manager-ha-525790-m03 kube-vip-ha-525790-m03: exit status 1 (79.919012ms)

                                                
                                                
-- stdout --
	Name:             busybox-7dff88458-fcds6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l7bnh (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-l7bnh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age               From               Message
	  ----     ------            ----              ----               -------
	  Warning  FailedScheduling  10s               default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  8s                default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  8s (x2 over 10s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "etcd-ha-525790-m03" not found
	Error from server (NotFound): pods "kube-controller-manager-ha-525790-m03" not found
	Error from server (NotFound): pods "kube-vip-ha-525790-m03" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context ha-525790 describe pod busybox-7dff88458-fcds6 etcd-ha-525790-m03 kube-controller-manager-ha-525790-m03 kube-vip-ha-525790-m03: exit status 1
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.80s)
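The FailedScheduling events above pin the pending busybox pod on three conditions at once: one node carries the node.kubernetes.io/unreachable taint, one node is marked unschedulable, and the pod's anti-affinity rules exclude the two remaining nodes. A quick way to see the same picture outside the test harness (sketch only, assuming the ha-525790 context from this run):

	kubectl --context ha-525790 get nodes -o wide
	kubectl --context ha-525790 describe nodes | grep -E '^Name:|^Taints:|^Unschedulable:'
	kubectl --context ha-525790 -n default get events --field-selector reason=FailedScheduling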

                                                
                                    

TestMultiControlPlane/serial/StopCluster (175.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 stop -v=7 --alsologtostderr
E0920 19:07:47.244621  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:07:58.971858  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-525790 stop -v=7 --alsologtostderr: exit status 82 (2m2.132833465s)

                                                
                                                
-- stdout --
	* Stopping node "ha-525790-m04"  ...
	* Stopping node "ha-525790-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 19:07:13.529925  771854 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:07:13.530062  771854 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:07:13.530072  771854 out.go:358] Setting ErrFile to fd 2...
	I0920 19:07:13.530076  771854 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:07:13.530261  771854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
	I0920 19:07:13.530485  771854 out.go:352] Setting JSON to false
	I0920 19:07:13.530562  771854 mustload.go:65] Loading cluster: ha-525790
	I0920 19:07:13.530985  771854 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:07:13.531073  771854 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 19:07:13.531248  771854 mustload.go:65] Loading cluster: ha-525790
	I0920 19:07:13.531371  771854 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:07:13.531427  771854 stop.go:39] StopHost: ha-525790-m04
	I0920 19:07:13.531785  771854 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:07:13.531823  771854 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:07:13.547025  771854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39329
	I0920 19:07:13.547612  771854 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:07:13.548177  771854 main.go:141] libmachine: Using API Version  1
	I0920 19:07:13.548200  771854 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:07:13.548574  771854 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:07:13.550974  771854 out.go:177] * Stopping node "ha-525790-m04"  ...
	I0920 19:07:13.552228  771854 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0920 19:07:13.552262  771854 main.go:141] libmachine: (ha-525790-m04) Calling .DriverName
	I0920 19:07:13.552497  771854 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0920 19:07:13.552526  771854 main.go:141] libmachine: (ha-525790-m04) Calling .GetSSHHostname
	I0920 19:07:13.554178  771854 retry.go:31] will retry after 241.911277ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0920 19:07:13.796414  771854 main.go:141] libmachine: (ha-525790-m04) Calling .GetSSHHostname
	I0920 19:07:13.798019  771854 retry.go:31] will retry after 201.866491ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0920 19:07:14.000425  771854 main.go:141] libmachine: (ha-525790-m04) Calling .GetSSHHostname
	I0920 19:07:14.001951  771854 retry.go:31] will retry after 295.293053ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0920 19:07:14.297397  771854 main.go:141] libmachine: (ha-525790-m04) Calling .GetSSHHostname
	I0920 19:07:14.298876  771854 retry.go:31] will retry after 889.290006ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0920 19:07:15.188909  771854 main.go:141] libmachine: (ha-525790-m04) Calling .GetSSHHostname
	W0920 19:07:15.190514  771854 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0920 19:07:15.190576  771854 main.go:141] libmachine: Stopping "ha-525790-m04"...
	I0920 19:07:15.190589  771854 main.go:141] libmachine: (ha-525790-m04) Calling .GetState
	I0920 19:07:15.191720  771854 stop.go:66] stop err: Machine "ha-525790-m04" is already stopped.
	I0920 19:07:15.191757  771854 stop.go:69] host is already stopped
	I0920 19:07:15.191770  771854 stop.go:39] StopHost: ha-525790-m02
	I0920 19:07:15.192058  771854 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:07:15.192098  771854 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:07:15.208003  771854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37111
	I0920 19:07:15.208441  771854 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:07:15.208957  771854 main.go:141] libmachine: Using API Version  1
	I0920 19:07:15.208984  771854 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:07:15.209346  771854 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:07:15.211290  771854 out.go:177] * Stopping node "ha-525790-m02"  ...
	I0920 19:07:15.212529  771854 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0920 19:07:15.212563  771854 main.go:141] libmachine: (ha-525790-m02) Calling .DriverName
	I0920 19:07:15.212805  771854 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0920 19:07:15.212835  771854 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHHostname
	I0920 19:07:15.215589  771854 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 19:07:15.216007  771854 main.go:141] libmachine: (ha-525790-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:aa:a2", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:57:32 +0000 UTC Type:0 Mac:52:54:00:da:aa:a2 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-525790-m02 Clientid:01:52:54:00:da:aa:a2}
	I0920 19:07:15.216049  771854 main.go:141] libmachine: (ha-525790-m02) DBG | domain ha-525790-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:da:aa:a2 in network mk-ha-525790
	I0920 19:07:15.216193  771854 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHPort
	I0920 19:07:15.216320  771854 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHKeyPath
	I0920 19:07:15.216448  771854 main.go:141] libmachine: (ha-525790-m02) Calling .GetSSHUsername
	I0920 19:07:15.216581  771854 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790-m02/id_rsa Username:docker}
	I0920 19:07:15.306520  771854 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0920 19:07:15.361253  771854 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0920 19:07:15.416602  771854 main.go:141] libmachine: Stopping "ha-525790-m02"...
	I0920 19:07:15.416627  771854 main.go:141] libmachine: (ha-525790-m02) Calling .GetState
	I0920 19:07:15.418164  771854 main.go:141] libmachine: (ha-525790-m02) Calling .Stop
	I0920 19:07:15.421490  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 0/120
	I0920 19:07:16.422948  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 1/120
	I0920 19:07:17.424199  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 2/120
	I0920 19:07:18.425844  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 3/120
	I0920 19:07:19.427691  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 4/120
	I0920 19:07:20.429495  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 5/120
	I0920 19:07:21.431229  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 6/120
	I0920 19:07:22.432793  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 7/120
	I0920 19:07:23.434497  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 8/120
	I0920 19:07:24.436237  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 9/120
	I0920 19:07:25.438483  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 10/120
	I0920 19:07:26.440484  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 11/120
	I0920 19:07:27.442241  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 12/120
	I0920 19:07:28.443839  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 13/120
	I0920 19:07:29.445398  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 14/120
	I0920 19:07:30.447419  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 15/120
	I0920 19:07:31.449143  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 16/120
	I0920 19:07:32.450523  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 17/120
	I0920 19:07:33.451871  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 18/120
	I0920 19:07:34.453303  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 19/120
	I0920 19:07:35.455143  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 20/120
	I0920 19:07:36.456440  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 21/120
	I0920 19:07:37.458107  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 22/120
	I0920 19:07:38.459831  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 23/120
	I0920 19:07:39.461302  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 24/120
	I0920 19:07:40.463447  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 25/120
	I0920 19:07:41.465512  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 26/120
	I0920 19:07:42.466801  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 27/120
	I0920 19:07:43.468268  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 28/120
	I0920 19:07:44.469861  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 29/120
	I0920 19:07:45.471803  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 30/120
	I0920 19:07:46.473347  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 31/120
	I0920 19:07:47.474947  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 32/120
	I0920 19:07:48.476499  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 33/120
	I0920 19:07:49.477981  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 34/120
	I0920 19:07:50.479397  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 35/120
	I0920 19:07:51.480841  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 36/120
	I0920 19:07:52.482289  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 37/120
	I0920 19:07:53.483691  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 38/120
	I0920 19:07:54.485040  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 39/120
	I0920 19:07:55.486812  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 40/120
	I0920 19:07:56.488373  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 41/120
	I0920 19:07:57.489626  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 42/120
	I0920 19:07:58.491046  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 43/120
	I0920 19:07:59.492299  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 44/120
	I0920 19:08:00.494061  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 45/120
	I0920 19:08:01.495560  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 46/120
	I0920 19:08:02.496857  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 47/120
	I0920 19:08:03.498266  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 48/120
	I0920 19:08:04.499640  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 49/120
	I0920 19:08:05.501339  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 50/120
	I0920 19:08:06.502776  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 51/120
	I0920 19:08:07.504177  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 52/120
	I0920 19:08:08.505570  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 53/120
	I0920 19:08:09.507034  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 54/120
	I0920 19:08:10.508841  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 55/120
	I0920 19:08:11.510836  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 56/120
	I0920 19:08:12.512180  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 57/120
	I0920 19:08:13.513945  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 58/120
	I0920 19:08:14.515283  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 59/120
	I0920 19:08:15.517599  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 60/120
	I0920 19:08:16.519194  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 61/120
	I0920 19:08:17.520685  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 62/120
	I0920 19:08:18.522026  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 63/120
	I0920 19:08:19.523471  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 64/120
	I0920 19:08:20.524947  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 65/120
	I0920 19:08:21.526469  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 66/120
	I0920 19:08:22.527930  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 67/120
	I0920 19:08:23.529370  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 68/120
	I0920 19:08:24.530604  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 69/120
	I0920 19:08:25.532402  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 70/120
	I0920 19:08:26.533728  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 71/120
	I0920 19:08:27.535040  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 72/120
	I0920 19:08:28.537232  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 73/120
	I0920 19:08:29.538485  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 74/120
	I0920 19:08:30.540177  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 75/120
	I0920 19:08:31.541410  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 76/120
	I0920 19:08:32.542740  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 77/120
	I0920 19:08:33.543966  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 78/120
	I0920 19:08:34.545251  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 79/120
	I0920 19:08:35.547120  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 80/120
	I0920 19:08:36.549417  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 81/120
	I0920 19:08:37.550724  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 82/120
	I0920 19:08:38.552506  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 83/120
	I0920 19:08:39.554042  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 84/120
	I0920 19:08:40.555893  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 85/120
	I0920 19:08:41.557467  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 86/120
	I0920 19:08:42.558690  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 87/120
	I0920 19:08:43.560146  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 88/120
	I0920 19:08:44.561655  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 89/120
	I0920 19:08:45.563569  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 90/120
	I0920 19:08:46.565027  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 91/120
	I0920 19:08:47.566272  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 92/120
	I0920 19:08:48.567653  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 93/120
	I0920 19:08:49.569020  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 94/120
	I0920 19:08:50.570795  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 95/120
	I0920 19:08:51.572261  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 96/120
	I0920 19:08:52.573653  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 97/120
	I0920 19:08:53.575048  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 98/120
	I0920 19:08:54.576315  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 99/120
	I0920 19:08:55.577716  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 100/120
	I0920 19:08:56.579115  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 101/120
	I0920 19:08:57.581324  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 102/120
	I0920 19:08:58.582819  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 103/120
	I0920 19:08:59.584192  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 104/120
	I0920 19:09:00.585892  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 105/120
	I0920 19:09:01.587310  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 106/120
	I0920 19:09:02.588562  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 107/120
	I0920 19:09:03.589900  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 108/120
	I0920 19:09:04.591307  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 109/120
	I0920 19:09:05.593142  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 110/120
	I0920 19:09:06.594687  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 111/120
	I0920 19:09:07.596075  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 112/120
	I0920 19:09:08.597491  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 113/120
	I0920 19:09:09.598825  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 114/120
	I0920 19:09:10.600544  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 115/120
	I0920 19:09:11.601840  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 116/120
	I0920 19:09:12.603197  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 117/120
	I0920 19:09:13.604499  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 118/120
	I0920 19:09:14.606069  771854 main.go:141] libmachine: (ha-525790-m02) Waiting for machine to stop 119/120
	I0920 19:09:15.607173  771854 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0920 19:09:15.607235  771854 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0920 19:09:15.609219  771854 out.go:201] 
	W0920 19:09:15.610632  771854 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0920 19:09:15.610655  771854 out.go:270] * 
	* 
	W0920 19:09:15.614581  771854 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 19:09:15.615743  771854 out.go:201] 

                                                
                                                
** /stderr **
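Before attempting the stop, the trace shows a pre-stop backup of /etc/cni and /etc/kubernetes into /var/lib/minikube/backup on ha-525790-m02 (the two rsync --archive --relative runs above); the same step is skipped on ha-525790-m04 because that host is already stopped. A sketch for confirming the backup landed, assuming the m02 node is still reachable over SSH:

	out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790-m02 -- sudo ls -R /var/lib/minikube/backup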
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-525790 stop -v=7 --alsologtostderr": exit status 82
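Exit status 82 (GUEST_STOP_TIMEOUT) means the kvm2 driver polled the full 0/120 through 119/120 loop above and ha-525790-m02 never left the Running state. Outside CI, the underlying libvirt domain can be inspected, and if necessary powered off, on the host before retrying the stop (sketch only; the domain name is taken from the log above, and virsh destroy is a hard power-off, not a graceful shutdown):

	sudo virsh list --all
	sudo virsh destroy ha-525790-m02
	out/minikube-linux-amd64 -p ha-525790 stop -v=7 --alsologtostderr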
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Done: out/minikube-linux-amd64 -p ha-525790 status -v=7 --alsologtostderr: (36.305746507s)
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-525790 status -v=7 --alsologtostderr": 
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-525790 status -v=7 --alsologtostderr": 
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-525790 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-525790 -n ha-525790
E0920 19:09:55.904620  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-525790 -n ha-525790: exit status 2 (15.579884822s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-525790 logs -n 25: (1.36467966s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-525790 ssh -n ha-525790-m02 sudo cat                                          | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m03_ha-525790-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m03:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04:/home/docker/cp-test_ha-525790-m03_ha-525790-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790-m04 sudo cat                                          | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m03_ha-525790-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-525790 cp testdata/cp-test.txt                                                | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3362703692/001/cp-test_ha-525790-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790:/home/docker/cp-test_ha-525790-m04_ha-525790.txt                       |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790 sudo cat                                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m04_ha-525790.txt                                 |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m02:/home/docker/cp-test_ha-525790-m04_ha-525790-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790-m02 sudo cat                                          | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m04_ha-525790-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m03:/home/docker/cp-test_ha-525790-m04_ha-525790-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n                                                                 | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | ha-525790-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-525790 ssh -n ha-525790-m03 sudo cat                                          | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC | 20 Sep 24 18:51 UTC |
	|         | /home/docker/cp-test_ha-525790-m04_ha-525790-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-525790 node stop m02 -v=7                                                     | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-525790 node start m02 -v=7                                                    | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-525790 -v=7                                                           | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-525790 -v=7                                                                | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-525790 --wait=true -v=7                                                    | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 18:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-525790                                                                | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 19:06 UTC |                     |
	| node    | ha-525790 node delete m03 -v=7                                                   | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 19:07 UTC | 20 Sep 24 19:07 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-525790 stop -v=7                                                              | ha-525790 | jenkins | v1.34.0 | 20 Sep 24 19:07 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:55:45
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:55:45.275296  768595 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:55:45.275412  768595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:55:45.275421  768595 out.go:358] Setting ErrFile to fd 2...
	I0920 18:55:45.275425  768595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:55:45.275635  768595 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
	I0920 18:55:45.276210  768595 out.go:352] Setting JSON to false
	I0920 18:55:45.277141  768595 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9495,"bootTime":1726849050,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:55:45.277240  768595 start.go:139] virtualization: kvm guest
	I0920 18:55:45.279445  768595 out.go:177] * [ha-525790] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:55:45.280764  768595 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 18:55:45.280835  768595 notify.go:220] Checking for updates...
	I0920 18:55:45.283366  768595 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:55:45.284696  768595 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:55:45.285940  768595 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:55:45.287169  768595 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:55:45.288409  768595 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:55:45.290193  768595 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:55:45.290315  768595 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:55:45.290797  768595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:55:45.290891  768595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:55:45.306404  768595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37931
	I0920 18:55:45.306820  768595 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:55:45.307492  768595 main.go:141] libmachine: Using API Version  1
	I0920 18:55:45.307521  768595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:55:45.307939  768595 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:55:45.308132  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:55:45.343272  768595 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 18:55:45.344502  768595 start.go:297] selected driver: kvm2
	I0920 18:55:45.344515  768595 start.go:901] validating driver "kvm2" against &{Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false e
fk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:55:45.344647  768595 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:55:45.344970  768595 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:55:45.345050  768595 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19678-739831/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:55:45.360027  768595 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:55:45.360707  768595 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:55:45.360736  768595 cni.go:84] Creating CNI manager for ""
	I0920 18:55:45.360793  768595 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0920 18:55:45.360859  768595 start.go:340] cluster config:
	{Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel
:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:55:45.361009  768595 iso.go:125] acquiring lock: {Name:mk7c8e0c52ea50ffb7ac28fb9347d4f667085c62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:55:45.363552  768595 out.go:177] * Starting "ha-525790" primary control-plane node in "ha-525790" cluster
	I0920 18:55:45.364920  768595 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:55:45.364979  768595 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:55:45.364990  768595 cache.go:56] Caching tarball of preloaded images
	I0920 18:55:45.365061  768595 preload.go:172] Found /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:55:45.365070  768595 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:55:45.365198  768595 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/config.json ...
	I0920 18:55:45.365394  768595 start.go:360] acquireMachinesLock for ha-525790: {Name:mke27b943eaf3105a3a7818ba8cbb5bd07aa92e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 18:55:45.365441  768595 start.go:364] duration metric: took 28.871µs to acquireMachinesLock for "ha-525790"
	I0920 18:55:45.365453  768595 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:55:45.365460  768595 fix.go:54] fixHost starting: 
	I0920 18:55:45.365716  768595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:55:45.365748  768595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:55:45.379754  768595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37639
	I0920 18:55:45.380277  768595 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:55:45.380763  768595 main.go:141] libmachine: Using API Version  1
	I0920 18:55:45.380778  768595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:55:45.381096  768595 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:55:45.381300  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:55:45.381472  768595 main.go:141] libmachine: (ha-525790) Calling .GetState
	I0920 18:55:45.382944  768595 fix.go:112] recreateIfNeeded on ha-525790: state=Running err=<nil>
	W0920 18:55:45.382979  768595 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:55:45.384708  768595 out.go:177] * Updating the running kvm2 "ha-525790" VM ...
	I0920 18:55:45.385966  768595 machine.go:93] provisionDockerMachine start ...
	I0920 18:55:45.385981  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:55:45.386173  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:55:45.388503  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.388933  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:45.388960  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.389104  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:55:45.389273  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.389402  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.389518  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:55:45.389711  768595 main.go:141] libmachine: Using SSH client type: native
	I0920 18:55:45.389908  768595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:55:45.389919  768595 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:55:45.492072  768595 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-525790
	
	I0920 18:55:45.492099  768595 main.go:141] libmachine: (ha-525790) Calling .GetMachineName
	I0920 18:55:45.492366  768595 buildroot.go:166] provisioning hostname "ha-525790"
	I0920 18:55:45.492393  768595 main.go:141] libmachine: (ha-525790) Calling .GetMachineName
	I0920 18:55:45.492559  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:55:45.495258  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.495689  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:45.495715  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.495923  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:55:45.496094  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.496279  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.496427  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:55:45.496584  768595 main.go:141] libmachine: Using SSH client type: native
	I0920 18:55:45.496775  768595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:55:45.496788  768595 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-525790 && echo "ha-525790" | sudo tee /etc/hostname
	I0920 18:55:45.611170  768595 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-525790
	
	I0920 18:55:45.611203  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:55:45.613965  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.614392  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:45.614418  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.614605  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:55:45.614780  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.614979  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.615163  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:55:45.615334  768595 main.go:141] libmachine: Using SSH client type: native
	I0920 18:55:45.615507  768595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:55:45.615522  768595 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-525790' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-525790/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-525790' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:55:45.716203  768595 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:55:45.716236  768595 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19678-739831/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-739831/.minikube}
	I0920 18:55:45.716258  768595 buildroot.go:174] setting up certificates
	I0920 18:55:45.716266  768595 provision.go:84] configureAuth start
	I0920 18:55:45.716287  768595 main.go:141] libmachine: (ha-525790) Calling .GetMachineName
	I0920 18:55:45.716546  768595 main.go:141] libmachine: (ha-525790) Calling .GetIP
	I0920 18:55:45.719410  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.719789  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:45.719816  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.720053  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:55:45.722137  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.722463  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:45.722483  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.722613  768595 provision.go:143] copyHostCerts
	I0920 18:55:45.722648  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 18:55:45.722687  768595 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem, removing ...
	I0920 18:55:45.722704  768595 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 18:55:45.722767  768595 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem (1078 bytes)
	I0920 18:55:45.722893  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 18:55:45.722922  768595 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem, removing ...
	I0920 18:55:45.722929  768595 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 18:55:45.722959  768595 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem (1123 bytes)
	I0920 18:55:45.723019  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 18:55:45.723040  768595 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem, removing ...
	I0920 18:55:45.723046  768595 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 18:55:45.723071  768595 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem (1679 bytes)
	I0920 18:55:45.723132  768595 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem org=jenkins.ha-525790 san=[127.0.0.1 192.168.39.149 ha-525790 localhost minikube]
	I0920 18:55:45.874751  768595 provision.go:177] copyRemoteCerts
	I0920 18:55:45.874835  768595 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:55:45.874884  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:55:45.877528  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.877971  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:45.878002  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:45.878210  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:55:45.878387  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:45.878591  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:55:45.878724  768595 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:55:45.960427  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 18:55:45.960518  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0920 18:55:45.994757  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 18:55:45.994865  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 18:55:46.024642  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 18:55:46.024718  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:55:46.055496  768595 provision.go:87] duration metric: took 339.216483ms to configureAuth
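
[Editorial aside, not part of the captured log.] The configureAuth phase logged above regenerates the host certs and issues a server certificate whose SANs are 127.0.0.1, 192.168.39.149, ha-525790, localhost and minikube before copying it to /etc/docker on the guest. Purely as an illustration of that kind of issuance (this is not minikube's own code), a minimal Go sketch using only the standard library might look as follows; the file paths, organization name and PKCS#1 key format are assumptions for the example, and PEM error handling is trimmed for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Load an existing CA certificate and key (paths assumed; the key is
	// assumed to be a PKCS#1 RSA key, i.e. an "RSA PRIVATE KEY" PEM block).
	caBlock, _ := pem.Decode(must(os.ReadFile("ca.pem")))
	caCert := must(x509.ParseCertificate(caBlock.Bytes))
	keyBlock, _ := pem.Decode(must(os.ReadFile("ca-key.pem")))
	caKey := must(x509.ParsePKCS1PrivateKey(keyBlock.Bytes))

	// Fresh key pair for the server certificate.
	serverKey := must(rsa.GenerateKey(rand.Reader, 2048))

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"example"}, CommonName: "ha-525790"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the san=[...] list in the log line above.
		DNSNames:    []string{"ha-525790", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.149")},
	}

	der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey))
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
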
	I0920 18:55:46.055535  768595 buildroot.go:189] setting minikube options for container-runtime
	I0920 18:55:46.055829  768595 config.go:182] Loaded profile config "ha-525790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:55:46.055929  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:55:46.058831  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:46.059288  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:55:46.059324  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:55:46.059533  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:55:46.059716  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:46.059891  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:55:46.060010  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:55:46.060167  768595 main.go:141] libmachine: Using SSH client type: native
	I0920 18:55:46.060375  768595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:55:46.060391  768595 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:57:16.901155  768595 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:57:16.901195  768595 machine.go:96] duration metric: took 1m31.515216231s to provisionDockerMachine
	I0920 18:57:16.901213  768595 start.go:293] postStartSetup for "ha-525790" (driver="kvm2")
	I0920 18:57:16.901229  768595 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:57:16.901256  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:57:16.901619  768595 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:57:16.901655  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:57:16.904582  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:16.905033  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:16.905077  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:16.905237  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:57:16.905435  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:57:16.905596  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:57:16.905768  768595 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:57:16.986592  768595 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:57:16.990860  768595 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 18:57:16.990889  768595 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/addons for local assets ...
	I0920 18:57:16.990948  768595 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/files for local assets ...
	I0920 18:57:16.991031  768595 filesync.go:149] local asset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> 7484972.pem in /etc/ssl/certs
	I0920 18:57:16.991042  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /etc/ssl/certs/7484972.pem
	I0920 18:57:16.991128  768595 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:57:17.000970  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 18:57:17.025421  768595 start.go:296] duration metric: took 124.189503ms for postStartSetup
	I0920 18:57:17.025508  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:57:17.025853  768595 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0920 18:57:17.025891  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:57:17.028640  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.029043  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:17.029071  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.029274  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:57:17.029491  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:57:17.029672  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:57:17.029818  768595 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	W0920 18:57:17.109879  768595 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0920 18:57:17.109911  768595 fix.go:56] duration metric: took 1m31.744451562s for fixHost
	I0920 18:57:17.109970  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:57:17.112933  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.113331  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:17.113363  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.113469  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:57:17.113648  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:57:17.113876  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:57:17.114026  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:57:17.114184  768595 main.go:141] libmachine: Using SSH client type: native
	I0920 18:57:17.114401  768595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0920 18:57:17.114415  768595 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 18:57:17.216062  768595 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726858637.181333539
	
	I0920 18:57:17.216090  768595 fix.go:216] guest clock: 1726858637.181333539
	I0920 18:57:17.216101  768595 fix.go:229] Guest: 2024-09-20 18:57:17.181333539 +0000 UTC Remote: 2024-09-20 18:57:17.109918074 +0000 UTC m=+91.872102399 (delta=71.415465ms)
	I0920 18:57:17.216125  768595 fix.go:200] guest clock delta is within tolerance: 71.415465ms
	I0920 18:57:17.216130  768595 start.go:83] releasing machines lock for "ha-525790", held for 1m31.850683513s
	I0920 18:57:17.216152  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:57:17.216461  768595 main.go:141] libmachine: (ha-525790) Calling .GetIP
	I0920 18:57:17.219017  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.219376  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:17.219412  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.219494  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:57:17.220012  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:57:17.220193  768595 main.go:141] libmachine: (ha-525790) Calling .DriverName
	I0920 18:57:17.220325  768595 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:57:17.220390  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:57:17.220399  768595 ssh_runner.go:195] Run: cat /version.json
	I0920 18:57:17.220418  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHHostname
	I0920 18:57:17.222866  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.223251  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.223284  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:17.223301  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.223449  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:57:17.223621  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:57:17.223790  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:17.223811  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:17.223813  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:57:17.223960  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHPort
	I0920 18:57:17.223963  768595 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:57:17.224104  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHKeyPath
	I0920 18:57:17.224245  768595 main.go:141] libmachine: (ha-525790) Calling .GetSSHUsername
	I0920 18:57:17.224417  768595 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/ha-525790/id_rsa Username:docker}
	I0920 18:57:17.296110  768595 ssh_runner.go:195] Run: systemctl --version
	I0920 18:57:17.321175  768595 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:57:17.477104  768595 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 18:57:17.485831  768595 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 18:57:17.485914  768595 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:57:17.495337  768595 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 18:57:17.495360  768595 start.go:495] detecting cgroup driver to use...
	I0920 18:57:17.495424  768595 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:57:17.511930  768595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:57:17.525328  768595 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:57:17.525387  768595 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:57:17.538722  768595 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:57:17.552122  768595 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:57:17.698681  768595 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:57:17.845821  768595 docker.go:233] disabling docker service ...
	I0920 18:57:17.845899  768595 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:57:17.863738  768595 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:57:17.877401  768595 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:57:18.024631  768595 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:57:18.172584  768595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:57:18.186842  768595 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:57:18.205846  768595 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:57:18.205925  768595 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.216288  768595 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:57:18.216358  768595 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.226555  768595 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.237201  768595 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.247630  768595 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:57:18.257984  768595 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.267924  768595 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.278978  768595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:57:18.288891  768595 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:57:18.297865  768595 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:57:18.306911  768595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:57:18.446180  768595 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:57:19.895749  768595 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.449526733s)
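
[Editorial aside, not part of the captured log.] The commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup manager, conmon cgroup, default sysctls) and then restart CRI-O. As a rough sketch of the same kind of edit done in memory rather than over SSH with sed (so not how minikube itself does it), the two main substitutions could look like this in Go; the config fragment is assumed for the example.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// A fragment standing in for /etc/crio/crio.conf.d/02-crio.conf
	// (contents assumed for the example).
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	fmt.Print(conf)
}
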
	I0920 18:57:19.895791  768595 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:57:19.895837  768595 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:57:19.904678  768595 start.go:563] Will wait 60s for crictl version
	I0920 18:57:19.904743  768595 ssh_runner.go:195] Run: which crictl
	I0920 18:57:19.908608  768595 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:57:19.945193  768595 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 18:57:19.945279  768595 ssh_runner.go:195] Run: crio --version
	I0920 18:57:19.974543  768595 ssh_runner.go:195] Run: crio --version
	I0920 18:57:20.007822  768595 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 18:57:20.009139  768595 main.go:141] libmachine: (ha-525790) Calling .GetIP
	I0920 18:57:20.011764  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:20.012169  768595 main.go:141] libmachine: (ha-525790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:48:3a", ip: ""} in network mk-ha-525790: {Iface:virbr1 ExpiryTime:2024-09-20 19:46:53 +0000 UTC Type:0 Mac:52:54:00:93:48:3a Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-525790 Clientid:01:52:54:00:93:48:3a}
	I0920 18:57:20.012198  768595 main.go:141] libmachine: (ha-525790) DBG | domain ha-525790 has defined IP address 192.168.39.149 and MAC address 52:54:00:93:48:3a in network mk-ha-525790
	I0920 18:57:20.012388  768595 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 18:57:20.017342  768595 kubeadm.go:883] updating cluster {Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:57:20.017482  768595 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:57:20.017559  768595 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:57:20.062678  768595 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:57:20.062704  768595 crio.go:433] Images already preloaded, skipping extraction
	I0920 18:57:20.062757  768595 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:57:20.098285  768595 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:57:20.098310  768595 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:57:20.098320  768595 kubeadm.go:934] updating node { 192.168.39.149 8443 v1.31.1 crio true true} ...
	I0920 18:57:20.098422  768595 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-525790 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:57:20.098485  768595 ssh_runner.go:195] Run: crio config
	I0920 18:57:20.146689  768595 cni.go:84] Creating CNI manager for ""
	I0920 18:57:20.146719  768595 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0920 18:57:20.146731  768595 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:57:20.146762  768595 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.149 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-525790 NodeName:ha-525790 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.149"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.149 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:57:20.146949  768595 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.149
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-525790"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.149
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.149"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
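
[Editorial aside, not part of the captured log.] The block above is the multi-document kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. Purely as an illustration, and assuming the gopkg.in/yaml.v3 package, a small Go sketch that walks such a multi-document file and reports each document's apiVersion and kind might look like this; the input path is an assumption.

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path is an assumption; point it at any multi-document kubeadm config.
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if err == io.EOF {
				break // no more documents
			}
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
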
	
	I0920 18:57:20.146969  768595 kube-vip.go:115] generating kube-vip config ...
	I0920 18:57:20.147010  768595 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 18:57:20.158523  768595 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 18:57:20.158643  768595 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
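
[Editorial aside, not part of the captured log.] The static pod manifest above runs kube-vip with control-plane load-balancing enabled, so the virtual IP 192.168.39.254 should answer on API server port 8443 once a leader holds the plndr-cp-lock lease. A quick way to probe that from outside the cluster is a plain HTTPS request to the VIP; this is only a hedged reachability sketch, certificate verification is skipped solely because the probe cares about connectivity, and even a 401/403 response shows the endpoint is being served.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Reachability probe only; do not skip verification for real traffic.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	// VIP and port taken from the kube-vip config above; /version is an
	// endpoint served by the Kubernetes API server.
	resp, err := client.Get("https://192.168.39.254:8443/version")
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("VIP answered with HTTP status:", resp.Status)
}
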
	I0920 18:57:20.158707  768595 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:57:20.168660  768595 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:57:20.168733  768595 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0920 18:57:20.178461  768595 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0920 18:57:20.198566  768595 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:57:20.217954  768595 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0920 18:57:20.237499  768595 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 18:57:20.258010  768595 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 18:57:20.262485  768595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:57:20.407038  768595 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:57:20.422336  768595 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790 for IP: 192.168.39.149
	I0920 18:57:20.422365  768595 certs.go:194] generating shared ca certs ...
	I0920 18:57:20.422387  768595 certs.go:226] acquiring lock for ca certs: {Name:mkf559981e1ff96dd3b092845a7637f34a653668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:57:20.422549  768595 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key
	I0920 18:57:20.422595  768595 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key
	I0920 18:57:20.422607  768595 certs.go:256] generating profile certs ...
	I0920 18:57:20.422714  768595 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/client.key
	I0920 18:57:20.422742  768595 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.5e4b97f0
	I0920 18:57:20.422758  768595 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.5e4b97f0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.149 192.168.39.246 192.168.39.105 192.168.39.254]
	I0920 18:57:20.498103  768595 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.5e4b97f0 ...
	I0920 18:57:20.498146  768595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.5e4b97f0: {Name:mkf1c7de4d51cd00dcbb302f98eb38a12aeaa743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:57:20.498349  768595 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.5e4b97f0 ...
	I0920 18:57:20.498366  768595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.5e4b97f0: {Name:mkd16bd720a2c366eb4c3af52495872448237117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:57:20.498439  768595 certs.go:381] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt.5e4b97f0 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt
	I0920 18:57:20.498595  768595 certs.go:385] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key.5e4b97f0 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key
	I0920 18:57:20.498727  768595 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key
	I0920 18:57:20.498744  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 18:57:20.498757  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 18:57:20.498773  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 18:57:20.498786  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 18:57:20.498798  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 18:57:20.498815  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 18:57:20.498828  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 18:57:20.498839  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 18:57:20.498902  768595 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem (1338 bytes)
	W0920 18:57:20.498929  768595 certs.go:480] ignoring /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497_empty.pem, impossibly tiny 0 bytes
	I0920 18:57:20.498939  768595 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:57:20.498966  768595 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:57:20.498987  768595 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:57:20.499009  768595 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem (1679 bytes)
	I0920 18:57:20.499046  768595 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 18:57:20.499073  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem -> /usr/share/ca-certificates/748497.pem
	I0920 18:57:20.499086  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /usr/share/ca-certificates/7484972.pem
	I0920 18:57:20.499098  768595 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:57:20.499673  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:57:20.526194  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:57:20.550080  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:57:20.573814  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:57:20.597383  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0920 18:57:20.621333  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:57:20.644650  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:57:20.669077  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/ha-525790/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:57:20.692742  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem --> /usr/share/ca-certificates/748497.pem (1338 bytes)
	I0920 18:57:20.716696  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /usr/share/ca-certificates/7484972.pem (1708 bytes)
	I0920 18:57:20.740168  768595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:57:20.763494  768595 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:57:20.779948  768595 ssh_runner.go:195] Run: openssl version
	I0920 18:57:20.785654  768595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7484972.pem && ln -fs /usr/share/ca-certificates/7484972.pem /etc/ssl/certs/7484972.pem"
	I0920 18:57:20.796055  768595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7484972.pem
	I0920 18:57:20.800308  768595 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 18:33 /usr/share/ca-certificates/7484972.pem
	I0920 18:57:20.800350  768595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7484972.pem
	I0920 18:57:20.805711  768595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7484972.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:57:20.814658  768595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:57:20.825022  768595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:57:20.829283  768595 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:57:20.829328  768595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:57:20.835197  768595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:57:20.844367  768595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/748497.pem && ln -fs /usr/share/ca-certificates/748497.pem /etc/ssl/certs/748497.pem"
	I0920 18:57:20.858330  768595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/748497.pem
	I0920 18:57:20.862698  768595 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 18:33 /usr/share/ca-certificates/748497.pem
	I0920 18:57:20.862756  768595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/748497.pem
	I0920 18:57:20.868290  768595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/748497.pem /etc/ssl/certs/51391683.0"
	I0920 18:57:20.877322  768595 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:57:20.881726  768595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:57:20.887174  768595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:57:20.892568  768595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:57:20.897933  768595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:57:20.903504  768595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:57:20.908964  768595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 18:57:20.914297  768595 kubeadm.go:392] StartCluster: {Name:ha-525790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-525790 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.105 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.181 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:57:20.914419  768595 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:57:20.914479  768595 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:57:20.953634  768595 cri.go:89] found id: "25771bcd68395f46a10f7a984281c99bb335a8ca69efb4245fa13e739f74e880"
	I0920 18:57:20.953663  768595 cri.go:89] found id: "05474c6dd3411b2d54bcdb9c489372dbdd009e7696128a025d961ffa61cea90e"
	I0920 18:57:20.953670  768595 cri.go:89] found id: "fdef47cd693637030df15d12b4203fda70a684a6ba84cf20353b69d3f9314810"
	I0920 18:57:20.953675  768595 cri.go:89] found id: "57fdde7a007ff9a10cfbb40f67eb3fd2036aeb4918ebe808fdb7ab94429b6f90"
	I0920 18:57:20.953679  768595 cri.go:89] found id: "172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1"
	I0920 18:57:20.953684  768595 cri.go:89] found id: "3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e"
	I0920 18:57:20.953688  768595 cri.go:89] found id: "5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98"
	I0920 18:57:20.953692  768595 cri.go:89] found id: "3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8"
	I0920 18:57:20.953696  768595 cri.go:89] found id: "c704a3be19bcb0cfb653cb3bdad4548ff16ab59fc886290b6b1ed57874b379cc"
	I0920 18:57:20.953705  768595 cri.go:89] found id: "7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706"
	I0920 18:57:20.953709  768595 cri.go:89] found id: "1196adfd1199669f289106197057a6a027e2a97c97830961d5c102c7143d67bb"
	I0920 18:57:20.953727  768595 cri.go:89] found id: "bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93"
	I0920 18:57:20.953734  768595 cri.go:89] found id: "49582cb9e07244a19c1edf1accdab94a1702d3cccc9e120b67b8c49f7629db72"
	I0920 18:57:20.953738  768595 cri.go:89] found id: ""
	I0920 18:57:20.953792  768595 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 20 19:10:07 ha-525790 crio[3621]: time="2024-09-20 19:10:07.861671005Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d9f86ecb-a5c5-49a2-a335-08c30f35a292 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:10:07 ha-525790 crio[3621]: time="2024-09-20 19:10:07.862643927Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4a802c3c-5531-4715-a832-9dc2baa032a6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:10:07 ha-525790 crio[3621]: time="2024-09-20 19:10:07.863311090Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859407863241095,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4a802c3c-5531-4715-a832-9dc2baa032a6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:10:07 ha-525790 crio[3621]: time="2024-09-20 19:10:07.863815813Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=131c54f3-9454-49f5-a562-cd1822ae186c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:10:07 ha-525790 crio[3621]: time="2024-09-20 19:10:07.863875996Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=131c54f3-9454-49f5-a562-cd1822ae186c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:10:07 ha-525790 crio[3621]: time="2024-09-20 19:10:07.864228827Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ba3571b7729068fde6139487b328bb566b050fb1d9fcd4cd1ed13963c333c6af,PodSandboxId:64ba18194b8cee6a5dc945c8a860e30df4889b5327784d28ca84fa101d3ee3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726859340960913053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68cc5ea6e0ba5d41fc6dcbf5c17e4a2a946a42ecf180623a73a5cde5c439f527,PodSandboxId:16a2a1305a51f60d21ddbbb6bb09b78cb9369c87e00a2d48189b0a13c1dff725,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726859326628946667,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d017a5b283a90948c66d42e738cadc0b0558d24b471a758eba6ba1ba1d8f7c38,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858686629742984,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667c79074c454aa20ce82977f878cfe4a37c6f5ea0695c815cbba15549f3a45f,PodSandboxId:0a6e91416ea52f22ed259013e4a0a21bf72498fd8c931f248c2115876ae94d63,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726858681003653008,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f33ac8941017429ef2f8b90f5da558d02aee1e4f28f943f00cbb9948c09384,PodSandboxId:63cc3aec72e5ac7247b8edf4ee58731ed8aeec1987054757f74a79629da16d78,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726858662970382907,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250bdcc9f914b29a36cef0bb52cd1ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c9c9f659f7c8eea8960ff4d57a4049347beaf567d58cc6dd20e62ae35179d4,PodSandboxId:f4deab987a6c34e26ea3f805609a830b7822a6602102d1f413f9ce3e6884ef4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647995000091,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contai
nerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fefbc436d3eff2c5abd1684c59842894a743e26f9eb3ca974dbbc048112157d3,PodSandboxId:146e6c4948059463392671cd62b7ec8b5a29dc21e9ab6ebe87a3f7c25839e916,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858647885688947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159d
cc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041c8157b3922d976470968a7d805abcf93da72e8542bc6a01ac49bce0a281c4,PodSandboxId:097a4985f63bc254a2bdce47ba0c22b1b5b624fcf1c6cf3c7eb8ea0012d8d427,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726858647587461354,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5c19fcb571e8a32048d366070b16f702a20dcc8c9e90f7efdbdbcf068bb31c1,PodSandboxId:947865a8625cf4fd5130e55bfe16430a11573647782a77e8f42d10abe79271e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858647717412599,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1977c4370e572f9d46a89fdac2f5bf124e2fe7a35d95b9c4d329384e57264a9,PodSandboxId:dddb1e001fdf1c2b4148f34e191168adccea81cff10b5b3e18b1e18c43e20229,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647711164608,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf18d395747b302270fce67fcca779ef04ced72182d681e473e9fc81edb7b5a,PodSandboxId:f3b7300b04471e8ff85210139d0aaf4a37484fda119ba2d5c88427f9d6e07acb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858647658691652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc
76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:231315ec7d013fd90fbc26e6a4f8fc59d177b677f3ec717caa70d251b84de829,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726858647426978213,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b
1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:344b03b51dddb76a43e34eff505cae1c7f2fc0c407fd4b1907c7b90ca3f1740d,PodSandboxId:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726858192106346190,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-cc
ab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e,PodSandboxId:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056980796182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Anno
tations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1,PodSandboxId:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056983757613,Labels:map[string]string{
io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98,PodSandboxId:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726858044669171219,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8,PodSandboxId:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726858044313148897,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706,PodSandboxId:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad9415
75eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726858033124074067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93,PodSandboxId:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e
661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726858033076556541,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=131c54f3-9454-49f5-a562-cd1822ae186c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:10:07 ha-525790 crio[3621]: time="2024-09-20 19:10:07.888044117Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=fdd5de1a-28ab-4c9c-b846-5931fa646dee name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 20 19:10:07 ha-525790 crio[3621]: time="2024-09-20 19:10:07.889027730Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0a6e91416ea52f22ed259013e4a0a21bf72498fd8c931f248c2115876ae94d63,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-z26jr,Uid:3a3cda3d-ccab-4483-98e6-50d779cc3354,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726858680848534710,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:49:50.378606577Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:63cc3aec72e5ac7247b8edf4ee58731ed8aeec1987054757f74a79629da16d78,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-525790,Uid:250bdcc9f914b29a36cef0bb52cd1ac5,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1726858662865131258,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250bdcc9f914b29a36cef0bb52cd1ac5,},Annotations:map[string]string{kubernetes.io/config.hash: 250bdcc9f914b29a36cef0bb52cd1ac5,kubernetes.io/config.seen: 2024-09-20T18:57:20.222918724Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f4deab987a6c34e26ea3f805609a830b7822a6602102d1f413f9ce3e6884ef4c,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-nfnkj,Uid:7994989d-6bfa-4d25-b7b7-662d2e6c742c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726858647153225091,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09
-20T18:47:36.440226200Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:146e6c4948059463392671cd62b7ec8b5a29dc21e9ab6ebe87a3f7c25839e916,Metadata:&PodSandboxMetadata{Name:kube-proxy-958jz,Uid:46603403-eb82-4f15-a1da-da62194a072f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726858647110804577,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:47:23.840921604Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dddb1e001fdf1c2b4148f34e191168adccea81cff10b5b3e18b1e18c43e20229,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-rpcds,Uid:7db58219-7147-4a45-b233-ef3c698566ef,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726858647089120616
,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:47:36.433422835Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f3b7300b04471e8ff85210139d0aaf4a37484fda119ba2d5c88427f9d6e07acb,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-525790,Uid:b5b17991bc76439c3c561e1834ba5b98,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726858647068053851,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b5b17991bc76439c3c561e1834ba5b98,kubernetes.io/config
.seen: 2024-09-20T18:47:19.594862954Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:64ba18194b8cee6a5dc945c8a860e30df4889b5327784d28ca84fa101d3ee3af,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-525790,Uid:09c07a212745d10d359109606d1f8e5a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726858647066139608,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.149:8443,kubernetes.io/config.hash: 09c07a212745d10d359109606d1f8e5a,kubernetes.io/config.seen: 2024-09-20T18:47:19.594859927Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:947865a8625cf4fd5130e55bfe16430a11573647782a77e8f42d10abe79271e7,Metadata:&PodSandboxMetadata{Name:etcd-ha-525790,Uid
:a2b3e6b5917d1f11b27828fbc85076e4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726858647053085002,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.149:2379,kubernetes.io/config.hash: a2b3e6b5917d1f11b27828fbc85076e4,kubernetes.io/config.seen: 2024-09-20T18:47:19.594856708Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:097a4985f63bc254a2bdce47ba0c22b1b5b624fcf1c6cf3c7eb8ea0012d8d427,Metadata:&PodSandboxMetadata{Name:kindnet-9qbm6,Uid:87e8ae18-a561-48ec-9835-27446b6917d3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726858647030949753,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet
-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:47:23.865527140Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-525790,Uid:fa36b1aee3057cc6a6644c2a2b2b9582,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726858647026135836,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: fa36b1aee3057cc6a6644c2a2b2b9582,kubernetes.io/config.seen: 2024-09-20T18:47:19.594861884Z,kubernetes.io/config.source: f
ile,},RuntimeHandler:,},&PodSandbox{Id:16a2a1305a51f60d21ddbbb6bb09b78cb9369c87e00a2d48189b0a13c1dff725,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:ea6bf34f-c1f7-4216-a61f-be30846c991b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726858646986059431,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imag
ePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-20T18:47:36.445299882Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-z26jr,Uid:3a3cda3d-ccab-4483-98e6-50d779cc3354,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726858190692240668,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:49:50.378606577Z,kubernetes.io/config.source
: api,},RuntimeHandler:,},&PodSandbox{Id:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-nfnkj,Uid:7994989d-6bfa-4d25-b7b7-662d2e6c742c,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726858056748003547,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:47:36.440226200Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-rpcds,Uid:7db58219-7147-4a45-b233-ef3c698566ef,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726858056743924756,Labels:map[string]string{io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:47:36.433422835Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&PodSandboxMetadata{Name:kindnet-9qbm6,Uid:87e8ae18-a561-48ec-9835-27446b6917d3,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726858044173674425,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:47:23.865527140Z,kubernetes.io/config.source: api,},Runt
imeHandler:,},&PodSandbox{Id:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&PodSandboxMetadata{Name:kube-proxy-958jz,Uid:46603403-eb82-4f15-a1da-da62194a072f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726858044156236050,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T18:47:23.840921604Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-525790,Uid:b5b17991bc76439c3c561e1834ba5b98,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726858032856363299,Labels:map[string]string{component: kube-scheduler,io.kubern
etes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b5b17991bc76439c3c561e1834ba5b98,kubernetes.io/config.seen: 2024-09-20T18:47:12.380324762Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&PodSandboxMetadata{Name:etcd-ha-525790,Uid:a2b3e6b5917d1f11b27828fbc85076e4,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726858032825529828,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.149:2379,kubernetes.io/config.hash: a2b3e6b5
917d1f11b27828fbc85076e4,kubernetes.io/config.seen: 2024-09-20T18:47:12.380318617Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=fdd5de1a-28ab-4c9c-b846-5931fa646dee name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 20 19:10:07 ha-525790 crio[3621]: time="2024-09-20 19:10:07.889775616Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=11b46e35-f41f-4cb6-9d18-7b7025731594 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:10:07 ha-525790 crio[3621]: time="2024-09-20 19:10:07.889847211Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=11b46e35-f41f-4cb6-9d18-7b7025731594 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:10:07 ha-525790 crio[3621]: time="2024-09-20 19:10:07.890193573Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ba3571b7729068fde6139487b328bb566b050fb1d9fcd4cd1ed13963c333c6af,PodSandboxId:64ba18194b8cee6a5dc945c8a860e30df4889b5327784d28ca84fa101d3ee3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726859340960913053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68cc5ea6e0ba5d41fc6dcbf5c17e4a2a946a42ecf180623a73a5cde5c439f527,PodSandboxId:16a2a1305a51f60d21ddbbb6bb09b78cb9369c87e00a2d48189b0a13c1dff725,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726859326628946667,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d017a5b283a90948c66d42e738cadc0b0558d24b471a758eba6ba1ba1d8f7c38,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858686629742984,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667c79074c454aa20ce82977f878cfe4a37c6f5ea0695c815cbba15549f3a45f,PodSandboxId:0a6e91416ea52f22ed259013e4a0a21bf72498fd8c931f248c2115876ae94d63,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726858681003653008,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f33ac8941017429ef2f8b90f5da558d02aee1e4f28f943f00cbb9948c09384,PodSandboxId:63cc3aec72e5ac7247b8edf4ee58731ed8aeec1987054757f74a79629da16d78,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726858662970382907,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250bdcc9f914b29a36cef0bb52cd1ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c9c9f659f7c8eea8960ff4d57a4049347beaf567d58cc6dd20e62ae35179d4,PodSandboxId:f4deab987a6c34e26ea3f805609a830b7822a6602102d1f413f9ce3e6884ef4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647995000091,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contai
nerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fefbc436d3eff2c5abd1684c59842894a743e26f9eb3ca974dbbc048112157d3,PodSandboxId:146e6c4948059463392671cd62b7ec8b5a29dc21e9ab6ebe87a3f7c25839e916,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858647885688947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159d
cc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041c8157b3922d976470968a7d805abcf93da72e8542bc6a01ac49bce0a281c4,PodSandboxId:097a4985f63bc254a2bdce47ba0c22b1b5b624fcf1c6cf3c7eb8ea0012d8d427,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726858647587461354,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5c19fcb571e8a32048d366070b16f702a20dcc8c9e90f7efdbdbcf068bb31c1,PodSandboxId:947865a8625cf4fd5130e55bfe16430a11573647782a77e8f42d10abe79271e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858647717412599,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1977c4370e572f9d46a89fdac2f5bf124e2fe7a35d95b9c4d329384e57264a9,PodSandboxId:dddb1e001fdf1c2b4148f34e191168adccea81cff10b5b3e18b1e18c43e20229,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647711164608,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf18d395747b302270fce67fcca779ef04ced72182d681e473e9fc81edb7b5a,PodSandboxId:f3b7300b04471e8ff85210139d0aaf4a37484fda119ba2d5c88427f9d6e07acb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858647658691652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc
76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:231315ec7d013fd90fbc26e6a4f8fc59d177b677f3ec717caa70d251b84de829,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726858647426978213,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b
1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:344b03b51dddb76a43e34eff505cae1c7f2fc0c407fd4b1907c7b90ca3f1740d,PodSandboxId:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726858192106346190,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-cc
ab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e,PodSandboxId:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056980796182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Anno
tations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1,PodSandboxId:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056983757613,Labels:map[string]string{
io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98,PodSandboxId:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726858044669171219,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8,PodSandboxId:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726858044313148897,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706,PodSandboxId:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad9415
75eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726858033124074067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93,PodSandboxId:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e
661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726858033076556541,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=11b46e35-f41f-4cb6-9d18-7b7025731594 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:10:07 ha-525790 crio[3621]: time="2024-09-20 19:10:07.908008105Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=865620df-9b69-4c24-8b90-23b478a20e6e name=/runtime.v1.RuntimeService/Version
	Sep 20 19:10:07 ha-525790 crio[3621]: time="2024-09-20 19:10:07.908089781Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=865620df-9b69-4c24-8b90-23b478a20e6e name=/runtime.v1.RuntimeService/Version
	Sep 20 19:10:07 ha-525790 crio[3621]: time="2024-09-20 19:10:07.910164588Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9d0035cb-d957-4b75-880a-d95f72a3a915 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:10:07 ha-525790 crio[3621]: time="2024-09-20 19:10:07.910795351Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859407910770913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9d0035cb-d957-4b75-880a-d95f72a3a915 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:10:07 ha-525790 crio[3621]: time="2024-09-20 19:10:07.911504066Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d83e2595-b4bf-47b8-9bcd-6e9647e706ce name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:10:07 ha-525790 crio[3621]: time="2024-09-20 19:10:07.911571463Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d83e2595-b4bf-47b8-9bcd-6e9647e706ce name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:10:07 ha-525790 crio[3621]: time="2024-09-20 19:10:07.911914757Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ba3571b7729068fde6139487b328bb566b050fb1d9fcd4cd1ed13963c333c6af,PodSandboxId:64ba18194b8cee6a5dc945c8a860e30df4889b5327784d28ca84fa101d3ee3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726859340960913053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68cc5ea6e0ba5d41fc6dcbf5c17e4a2a946a42ecf180623a73a5cde5c439f527,PodSandboxId:16a2a1305a51f60d21ddbbb6bb09b78cb9369c87e00a2d48189b0a13c1dff725,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726859326628946667,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d017a5b283a90948c66d42e738cadc0b0558d24b471a758eba6ba1ba1d8f7c38,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858686629742984,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667c79074c454aa20ce82977f878cfe4a37c6f5ea0695c815cbba15549f3a45f,PodSandboxId:0a6e91416ea52f22ed259013e4a0a21bf72498fd8c931f248c2115876ae94d63,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726858681003653008,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f33ac8941017429ef2f8b90f5da558d02aee1e4f28f943f00cbb9948c09384,PodSandboxId:63cc3aec72e5ac7247b8edf4ee58731ed8aeec1987054757f74a79629da16d78,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726858662970382907,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250bdcc9f914b29a36cef0bb52cd1ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c9c9f659f7c8eea8960ff4d57a4049347beaf567d58cc6dd20e62ae35179d4,PodSandboxId:f4deab987a6c34e26ea3f805609a830b7822a6602102d1f413f9ce3e6884ef4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647995000091,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contai
nerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fefbc436d3eff2c5abd1684c59842894a743e26f9eb3ca974dbbc048112157d3,PodSandboxId:146e6c4948059463392671cd62b7ec8b5a29dc21e9ab6ebe87a3f7c25839e916,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858647885688947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159d
cc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041c8157b3922d976470968a7d805abcf93da72e8542bc6a01ac49bce0a281c4,PodSandboxId:097a4985f63bc254a2bdce47ba0c22b1b5b624fcf1c6cf3c7eb8ea0012d8d427,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726858647587461354,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5c19fcb571e8a32048d366070b16f702a20dcc8c9e90f7efdbdbcf068bb31c1,PodSandboxId:947865a8625cf4fd5130e55bfe16430a11573647782a77e8f42d10abe79271e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858647717412599,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1977c4370e572f9d46a89fdac2f5bf124e2fe7a35d95b9c4d329384e57264a9,PodSandboxId:dddb1e001fdf1c2b4148f34e191168adccea81cff10b5b3e18b1e18c43e20229,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647711164608,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf18d395747b302270fce67fcca779ef04ced72182d681e473e9fc81edb7b5a,PodSandboxId:f3b7300b04471e8ff85210139d0aaf4a37484fda119ba2d5c88427f9d6e07acb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858647658691652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc
76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:231315ec7d013fd90fbc26e6a4f8fc59d177b677f3ec717caa70d251b84de829,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726858647426978213,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b
1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:344b03b51dddb76a43e34eff505cae1c7f2fc0c407fd4b1907c7b90ca3f1740d,PodSandboxId:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726858192106346190,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-cc
ab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e,PodSandboxId:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056980796182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Anno
tations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1,PodSandboxId:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056983757613,Labels:map[string]string{
io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98,PodSandboxId:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726858044669171219,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8,PodSandboxId:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726858044313148897,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706,PodSandboxId:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad9415
75eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726858033124074067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93,PodSandboxId:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e
661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726858033076556541,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d83e2595-b4bf-47b8-9bcd-6e9647e706ce name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:10:07 ha-525790 crio[3621]: time="2024-09-20 19:10:07.956848976Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=07453eaf-34d1-4a94-8bc8-f82ae2a06e04 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:10:07 ha-525790 crio[3621]: time="2024-09-20 19:10:07.956943425Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=07453eaf-34d1-4a94-8bc8-f82ae2a06e04 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:10:07 ha-525790 crio[3621]: time="2024-09-20 19:10:07.958561046Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ae41c347-376c-468f-a9ed-ddb437aa2b6f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:10:07 ha-525790 crio[3621]: time="2024-09-20 19:10:07.958998355Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859407958963713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae41c347-376c-468f-a9ed-ddb437aa2b6f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:10:07 ha-525790 crio[3621]: time="2024-09-20 19:10:07.959544561Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc07f1df-31e9-4154-980b-18f06b249c2f name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:10:07 ha-525790 crio[3621]: time="2024-09-20 19:10:07.959619483Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc07f1df-31e9-4154-980b-18f06b249c2f name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:10:07 ha-525790 crio[3621]: time="2024-09-20 19:10:07.959980798Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ba3571b7729068fde6139487b328bb566b050fb1d9fcd4cd1ed13963c333c6af,PodSandboxId:64ba18194b8cee6a5dc945c8a860e30df4889b5327784d28ca84fa101d3ee3af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726859340960913053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c07a212745d10d359109606d1f8e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68cc5ea6e0ba5d41fc6dcbf5c17e4a2a946a42ecf180623a73a5cde5c439f527,PodSandboxId:16a2a1305a51f60d21ddbbb6bb09b78cb9369c87e00a2d48189b0a13c1dff725,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726859326628946667,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6bf34f-c1f7-4216-a61f-be30846c991b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d017a5b283a90948c66d42e738cadc0b0558d24b471a758eba6ba1ba1d8f7c38,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726858686629742984,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667c79074c454aa20ce82977f878cfe4a37c6f5ea0695c815cbba15549f3a45f,PodSandboxId:0a6e91416ea52f22ed259013e4a0a21bf72498fd8c931f248c2115876ae94d63,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726858681003653008,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-ccab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f33ac8941017429ef2f8b90f5da558d02aee1e4f28f943f00cbb9948c09384,PodSandboxId:63cc3aec72e5ac7247b8edf4ee58731ed8aeec1987054757f74a79629da16d78,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726858662970382907,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250bdcc9f914b29a36cef0bb52cd1ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c9c9f659f7c8eea8960ff4d57a4049347beaf567d58cc6dd20e62ae35179d4,PodSandboxId:f4deab987a6c34e26ea3f805609a830b7822a6602102d1f413f9ce3e6884ef4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647995000091,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contai
nerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fefbc436d3eff2c5abd1684c59842894a743e26f9eb3ca974dbbc048112157d3,PodSandboxId:146e6c4948059463392671cd62b7ec8b5a29dc21e9ab6ebe87a3f7c25839e916,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726858647885688947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159d
cc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041c8157b3922d976470968a7d805abcf93da72e8542bc6a01ac49bce0a281c4,PodSandboxId:097a4985f63bc254a2bdce47ba0c22b1b5b624fcf1c6cf3c7eb8ea0012d8d427,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726858647587461354,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5c19fcb571e8a32048d366070b16f702a20dcc8c9e90f7efdbdbcf068bb31c1,PodSandboxId:947865a8625cf4fd5130e55bfe16430a11573647782a77e8f42d10abe79271e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726858647717412599,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1977c4370e572f9d46a89fdac2f5bf124e2fe7a35d95b9c4d329384e57264a9,PodSandboxId:dddb1e001fdf1c2b4148f34e191168adccea81cff10b5b3e18b1e18c43e20229,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726858647711164608,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf18d395747b302270fce67fcca779ef04ced72182d681e473e9fc81edb7b5a,PodSandboxId:f3b7300b04471e8ff85210139d0aaf4a37484fda119ba2d5c88427f9d6e07acb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726858647658691652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc
76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:231315ec7d013fd90fbc26e6a4f8fc59d177b677f3ec717caa70d251b84de829,PodSandboxId:4014793ae3deb9408681a17f32d001e095d956a2f5db2587611f500cf115c760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726858647426978213,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa36b
1aee3057cc6a6644c2a2b2b9582,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:344b03b51dddb76a43e34eff505cae1c7f2fc0c407fd4b1907c7b90ca3f1740d,PodSandboxId:125671e39b996de9173fb6e5754c1f720d8fb94a9ec3ba2648c552be46f185a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726858192106346190,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z26jr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a3cda3d-cc
ab-4483-98e6-50d779cc3354,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e,PodSandboxId:34517f9f64c86c31f36791ed8b2e821e014fa38c4d5f061335b36a47c6fb0c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056980796182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rpcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7db58219-7147-4a45-b233-ef3c698566ef,},Anno
tations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1,PodSandboxId:5dbd6acffd5c5f1fc1ed411875b96b65fe2c1d675a5483c7f2f18993217ee740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726858056983757613,Labels:map[string]string{
io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nfnkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994989d-6bfa-4d25-b7b7-662d2e6c742c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98,PodSandboxId:64136f65f6d3496d81a3df4a22e9b49752cfb4ea3330d15b5cac078fc20e6274,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726858044669171219,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9qbm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87e8ae18-a561-48ec-9835-27446b6917d3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8,PodSandboxId:2e440a5ac73b703742ea75b78ed40f3e7cda3c4c09086f4d4a5983a6258103f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726858044313148897,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-958jz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46603403-eb82-4f15-a1da-da62194a072f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706,PodSandboxId:fae09dfcf3d6ff9fba515597c53b3d06da69a9ba522f10c5ca018ed7e0de0c4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad9415
75eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726858033124074067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b17991bc76439c3c561e1834ba5b98,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93,PodSandboxId:17818940c2036132b13c4f542ffd866953605e459f8a79eabaafe4b29fc8179a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e
661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726858033076556541,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-525790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b3e6b5917d1f11b27828fbc85076e4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cc07f1df-31e9-4154-980b-18f06b249c2f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ba3571b772906       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Exited              kube-apiserver            4                   64ba18194b8ce       kube-apiserver-ha-525790
	68cc5ea6e0ba5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       5                   16a2a1305a51f       storage-provisioner
	d017a5b283a90       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      12 minutes ago       Running             kube-controller-manager   2                   4014793ae3deb       kube-controller-manager-ha-525790
	667c79074c454       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      12 minutes ago       Running             busybox                   1                   0a6e91416ea52       busybox-7dff88458-z26jr
	22f33ac894101       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      12 minutes ago       Running             kube-vip                  0                   63cc3aec72e5a       kube-vip-ha-525790
	a2c9c9f659f7c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      12 minutes ago       Running             coredns                   1                   f4deab987a6c3       coredns-7c65d6cfc9-nfnkj
	fefbc436d3eff       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      12 minutes ago       Running             kube-proxy                1                   146e6c4948059       kube-proxy-958jz
	c5c19fcb571e8       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      12 minutes ago       Running             etcd                      1                   947865a8625cf       etcd-ha-525790
	a1977c4370e57       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      12 minutes ago       Running             coredns                   1                   dddb1e001fdf1       coredns-7c65d6cfc9-rpcds
	6cf18d395747b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      12 minutes ago       Running             kube-scheduler            1                   f3b7300b04471       kube-scheduler-ha-525790
	041c8157b3922       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      12 minutes ago       Running             kindnet-cni               1                   097a4985f63bc       kindnet-9qbm6
	231315ec7d013       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      12 minutes ago       Exited              kube-controller-manager   1                   4014793ae3deb       kube-controller-manager-ha-525790
	344b03b51dddb       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   20 minutes ago       Exited              busybox                   0                   125671e39b996       busybox-7dff88458-z26jr
	172e8f75d2a84       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      22 minutes ago       Exited              coredns                   0                   5dbd6acffd5c5       coredns-7c65d6cfc9-nfnkj
	3dff404b6ad2a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      22 minutes ago       Exited              coredns                   0                   34517f9f64c86       coredns-7c65d6cfc9-rpcds
	5579930bef0fc       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      22 minutes ago       Exited              kindnet-cni               0                   64136f65f6d34       kindnet-9qbm6
	3d469134674c2       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      22 minutes ago       Exited              kube-proxy                0                   2e440a5ac73b7       kube-proxy-958jz
	7d0496391eb85       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      22 minutes ago       Exited              kube-scheduler            0                   fae09dfcf3d6f       kube-scheduler-ha-525790
	bcca29b119984       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      22 minutes ago       Exited              etcd                      0                   17818940c2036       etcd-ha-525790
	
	
	==> coredns [172e8f75d2a84c11a2d683774e4e79823ed16a14c82688c996002b29dccacbf1] <==
	[INFO] 10.244.1.2:49534 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000164113s
	[INFO] 10.244.2.2:50032 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167479s
	[INFO] 10.244.2.2:33413 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001865571s
	[INFO] 10.244.0.4:38374 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010475s
	[INFO] 10.244.0.4:44676 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000170058s
	[INFO] 10.244.0.4:54182 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000123082s
	[INFO] 10.244.0.4:52067 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108075s
	[INFO] 10.244.1.2:36885 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000133944s
	[INFO] 10.244.2.2:48327 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127372s
	[INFO] 10.244.2.2:52262 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160755s
	[INFO] 10.244.0.4:44171 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111758s
	[INFO] 10.244.1.2:36220 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000196033s
	[INFO] 10.244.1.2:33859 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000222322s
	[INFO] 10.244.1.2:55349 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000158431s
	[INFO] 10.244.2.2:37976 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138385s
	[INFO] 10.244.2.2:56378 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000191303s
	[INFO] 10.244.2.2:54246 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117607s
	[INFO] 10.244.0.4:53115 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116565s
	[INFO] 10.244.0.4:49608 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000095821s
	[INFO] 10.244.0.4:60862 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111997s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [3dff404b6ad2ac9549d65623b603c98c46e365edebdddc3fa43f5ef547051d3e] <==
	[INFO] 10.244.2.2:42750 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000311517s
	[INFO] 10.244.2.2:42748 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001319529s
	[INFO] 10.244.2.2:49203 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000190348s
	[INFO] 10.244.2.2:44849 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00019366s
	[INFO] 10.244.2.2:52186 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103082s
	[INFO] 10.244.0.4:58300 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140735s
	[INFO] 10.244.0.4:59752 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001702673s
	[INFO] 10.244.0.4:33721 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001170599s
	[INFO] 10.244.0.4:42180 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061647s
	[INFO] 10.244.1.2:49177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000333372s
	[INFO] 10.244.1.2:57192 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147894s
	[INFO] 10.244.1.2:59125 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095482s
	[INFO] 10.244.2.2:50879 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019818s
	[INFO] 10.244.2.2:47467 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096359s
	[INFO] 10.244.0.4:54464 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087148s
	[INFO] 10.244.0.4:40326 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011895s
	[INFO] 10.244.0.4:46142 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071583s
	[INFO] 10.244.1.2:50168 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000224622s
	[INFO] 10.244.2.2:50611 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000117577s
	[INFO] 10.244.0.4:57391 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000320119s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1717&timeout=7m26s&timeoutSeconds=446&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1736&timeout=7m56s&timeoutSeconds=476&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1713&timeout=6m15s&timeoutSeconds=375&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a1977c4370e572f9d46a89fdac2f5bf124e2fe7a35d95b9c4d329384e57264a9] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3240": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3240": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: Unexpected error when reading response body: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: Unexpected error when reading response body: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: Trace[1529992728]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (20-Sep-2024 19:09:45.122) (total time: 11741ms):
	Trace[1529992728]: ---"Objects listed" error:unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug="" 11741ms (19:09:56.863)
	Trace[1529992728]: [11.741136269s] [11.741136269s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug=""
	
	
	==> coredns [a2c9c9f659f7c8eea8960ff4d57a4049347beaf567d58cc6dd20e62ae35179d4] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:56710->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:56710->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:56726->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:56726->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3222": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3222": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3207": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3207": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep20 18:47] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.053987] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058272] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.180542] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.143015] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.280287] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +3.923962] systemd-fstab-generator[743]: Ignoring "noauto" option for root device
	[  +3.905808] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.064972] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.290695] systemd-fstab-generator[1298]: Ignoring "noauto" option for root device
	[  +0.091789] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.472602] kauditd_printk_skb: 36 callbacks suppressed
	[ +11.974718] kauditd_printk_skb: 23 callbacks suppressed
	[Sep20 18:48] kauditd_printk_skb: 24 callbacks suppressed
	[Sep20 18:57] systemd-fstab-generator[3539]: Ignoring "noauto" option for root device
	[  +0.144284] systemd-fstab-generator[3551]: Ignoring "noauto" option for root device
	[  +0.175904] systemd-fstab-generator[3567]: Ignoring "noauto" option for root device
	[  +0.158366] systemd-fstab-generator[3579]: Ignoring "noauto" option for root device
	[  +0.266903] systemd-fstab-generator[3607]: Ignoring "noauto" option for root device
	[  +1.960904] systemd-fstab-generator[3709]: Ignoring "noauto" option for root device
	[  +6.729271] kauditd_printk_skb: 122 callbacks suppressed
	[ +12.246162] kauditd_printk_skb: 97 callbacks suppressed
	[ +10.067162] kauditd_printk_skb: 1 callbacks suppressed
	[Sep20 18:58] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [bcca29b1199844f517f0f2c2e1cd7f5e7913ed52e85ee8540fe97b5e29133a93] <==
	2024/09/20 18:55:46 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/20 18:55:46 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/20 18:55:46 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/20 18:55:46 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-20T18:55:46.260496Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.149:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T18:55:46.260580Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.149:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-20T18:55:46.260677Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"ba3e3e863cacc4d","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-20T18:55:46.260889Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c8e33f35ad636831"}
	{"level":"info","ts":"2024-09-20T18:55:46.260941Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c8e33f35ad636831"}
	{"level":"info","ts":"2024-09-20T18:55:46.260980Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c8e33f35ad636831"}
	{"level":"info","ts":"2024-09-20T18:55:46.261097Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831"}
	{"level":"info","ts":"2024-09-20T18:55:46.261203Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831"}
	{"level":"info","ts":"2024-09-20T18:55:46.261373Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ba3e3e863cacc4d","remote-peer-id":"c8e33f35ad636831"}
	{"level":"info","ts":"2024-09-20T18:55:46.261477Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c8e33f35ad636831"}
	{"level":"info","ts":"2024-09-20T18:55:46.261501Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T18:55:46.261581Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T18:55:46.261695Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T18:55:46.261817Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ba3e3e863cacc4d","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T18:55:46.261878Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ba3e3e863cacc4d","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T18:55:46.261957Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ba3e3e863cacc4d","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T18:55:46.261987Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"6c1c1087b613d98"}
	{"level":"info","ts":"2024-09-20T18:55:46.264966Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.149:2380"}
	{"level":"warn","ts":"2024-09-20T18:55:46.265043Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.834260925s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-20T18:55:46.265109Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.149:2380"}
	{"level":"info","ts":"2024-09-20T18:55:46.265138Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-525790","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.149:2380"],"advertise-client-urls":["https://192.168.39.149:2379"]}
	
	
	==> etcd [c5c19fcb571e8a32048d366070b16f702a20dcc8c9e90f7efdbdbcf068bb31c1] <==
	{"level":"warn","ts":"2024-09-20T19:10:04.206530Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":14721583357781341672,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-09-20T19:10:04.645053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-20T19:10:04.645093Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-20T19:10:04.645132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d received MsgPreVoteResp from ba3e3e863cacc4d at term 3"}
	{"level":"info","ts":"2024-09-20T19:10:04.645147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d [logterm: 3, index: 3959] sent MsgPreVote request to c8e33f35ad636831 at term 3"}
	{"level":"warn","ts":"2024-09-20T19:10:04.707714Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":14721583357781341672,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-20T19:10:05.208922Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":14721583357781341672,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-09-20T19:10:05.645693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-20T19:10:05.645751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-20T19:10:05.645765Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d received MsgPreVoteResp from ba3e3e863cacc4d at term 3"}
	{"level":"info","ts":"2024-09-20T19:10:05.645789Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d [logterm: 3, index: 3959] sent MsgPreVote request to c8e33f35ad636831 at term 3"}
	{"level":"warn","ts":"2024-09-20T19:10:05.709342Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":14721583357781341672,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-20T19:10:06.209479Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":14721583357781341672,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-09-20T19:10:06.645608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-20T19:10:06.645764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-20T19:10:06.645804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d received MsgPreVoteResp from ba3e3e863cacc4d at term 3"}
	{"level":"info","ts":"2024-09-20T19:10:06.645838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d [logterm: 3, index: 3959] sent MsgPreVote request to c8e33f35ad636831 at term 3"}
	{"level":"warn","ts":"2024-09-20T19:10:06.710031Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":14721583357781341672,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-20T19:10:07.211222Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":14721583357781341672,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-09-20T19:10:07.645666Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-20T19:10:07.645706Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-20T19:10:07.645718Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d received MsgPreVoteResp from ba3e3e863cacc4d at term 3"}
	{"level":"info","ts":"2024-09-20T19:10:07.645731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d [logterm: 3, index: 3959] sent MsgPreVote request to c8e33f35ad636831 at term 3"}
	{"level":"warn","ts":"2024-09-20T19:10:07.712188Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":14721583357781341672,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-20T19:10:08.212853Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":14721583357781341672,"retry-timeout":"500ms"}
	
	
	==> kernel <==
	 19:10:08 up 23 min,  0 users,  load average: 0.38, 0.69, 0.48
	Linux ha-525790 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [041c8157b3922d976470968a7d805abcf93da72e8542bc6a01ac49bce0a281c4] <==
	I0920 19:09:28.905770       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 19:09:28.905799       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	I0920 19:09:38.911165       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 19:09:38.911397       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	I0920 19:09:38.911578       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 19:09:38.911604       1 main.go:299] handling current node
	I0920 19:09:38.911642       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 19:09:38.911660       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 19:09:48.905176       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 19:09:48.905234       1 main.go:299] handling current node
	I0920 19:09:48.905325       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 19:09:48.905333       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 19:09:48.905524       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 19:09:48.905551       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	I0920 19:09:58.905246       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 19:09:58.905347       1 main.go:299] handling current node
	I0920 19:09:58.905361       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 19:09:58.905396       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 19:09:58.905536       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 19:09:58.905542       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	W0920 19:10:00.069842       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=3359": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	I0920 19:10:00.069972       1 trace.go:236] Trace[1449355099]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (20-Sep-2024 19:09:48.183) (total time: 11886ms):
	Trace[1449355099]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=3359": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF 11886ms (19:10:00.069)
	Trace[1449355099]: [11.886311203s] [11.886311203s] END
	E0920 19:10:00.069994       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=3359": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> kindnet [5579930bef0fc52712a38868fdf168669cdc0f524ecd83f4493491722fea0e98] <==
	I0920 18:55:25.880711       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:55:25.880821       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 18:55:25.880982       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0920 18:55:25.881006       1 main.go:322] Node ha-525790-m03 has CIDR [10.244.2.0/24] 
	I0920 18:55:25.881062       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 18:55:25.881173       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	I0920 18:55:25.881330       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 18:55:25.881366       1 main.go:299] handling current node
	I0920 18:55:35.880574       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 18:55:35.880753       1 main.go:299] handling current node
	I0920 18:55:35.880788       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:55:35.880809       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 18:55:35.880968       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0920 18:55:35.881037       1 main.go:322] Node ha-525790-m03 has CIDR [10.244.2.0/24] 
	I0920 18:55:35.881188       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 18:55:35.881225       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	E0920 18:55:44.415378       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes)
	I0920 18:55:45.880519       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0920 18:55:45.880573       1 main.go:299] handling current node
	I0920 18:55:45.880594       1 main.go:295] Handling node with IPs: map[192.168.39.246:{}]
	I0920 18:55:45.880614       1 main.go:322] Node ha-525790-m02 has CIDR [10.244.1.0/24] 
	I0920 18:55:45.880735       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0920 18:55:45.880740       1 main.go:322] Node ha-525790-m03 has CIDR [10.244.2.0/24] 
	I0920 18:55:45.880784       1 main.go:295] Handling node with IPs: map[192.168.39.181:{}]
	I0920 18:55:45.880788       1 main.go:322] Node ha-525790-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [ba3571b7729068fde6139487b328bb566b050fb1d9fcd4cd1ed13963c333c6af] <==
	E0920 19:09:56.787320       1 cacher.go:478] cacher (customresourcedefinitions.apiextensions.k8s.io): unexpected ListAndWatch error: failed to list *apiextensions.CustomResourceDefinition: etcdserver: request timed out; reinitializing...
	W0920 19:09:56.751194       1 reflector.go:561] storage/cacher.go:/persistentvolumeclaims: failed to list *core.PersistentVolumeClaim: etcdserver: request timed out
	E0920 19:09:56.787328       1 cacher.go:478] cacher (persistentvolumeclaims): unexpected ListAndWatch error: failed to list *core.PersistentVolumeClaim: etcdserver: request timed out; reinitializing...
	W0920 19:09:56.751212       1 reflector.go:561] storage/cacher.go:/ingressclasses: failed to list *networking.IngressClass: etcdserver: request timed out
	E0920 19:09:56.787337       1 cacher.go:478] cacher (ingressclasses.networking.k8s.io): unexpected ListAndWatch error: failed to list *networking.IngressClass: etcdserver: request timed out; reinitializing...
	F0920 19:09:56.751247       1 hooks.go:210] PostStartHook "rbac/bootstrap-roles" failed: unable to initialize roles: timed out waiting for the condition
	E0920 19:09:56.809199       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
	E0920 19:09:56.833335       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
	E0920 19:09:56.833565       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
	E0920 19:09:56.833638       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
	E0920 19:09:56.833694       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
	E0920 19:09:56.833759       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
	W0920 19:09:56.844706       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ClusterRoleBinding: etcdserver: request timed out
	E0920 19:09:56.844755       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ClusterRoleBinding: failed to list *v1.ClusterRoleBinding: etcdserver: request timed out" logger="UnhandledError"
	E0920 19:09:56.844966       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, etcdserver: request timed out]"
	W0920 19:09:56.845059       1 reflector.go:561] storage/cacher.go:/services/endpoints: failed to list *core.Endpoints: etcdserver: request timed out
	E0920 19:09:56.845087       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, etcdserver: request timed out]"
	E0920 19:09:56.787164       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PriorityLevelConfiguration: failed to list *v1.PriorityLevelConfiguration: etcdserver: request timed out" logger="UnhandledError"
	E0920 19:09:56.845092       1 cacher.go:478] cacher (endpoints): unexpected ListAndWatch error: failed to list *core.Endpoints: etcdserver: request timed out; reinitializing...
	W0920 19:09:56.751084       1 reflector.go:561] storage/cacher.go:/prioritylevelconfigurations: failed to list *flowcontrol.PriorityLevelConfiguration: etcdserver: request timed out
	W0920 19:09:56.833473       1 reflector.go:561] storage/cacher.go:/pods: failed to list *core.Pod: etcdserver: request timed out
	W0920 19:09:56.833506       1 reflector.go:561] storage/cacher.go:/resourcequotas: failed to list *core.ResourceQuota: etcdserver: request timed out
	W0920 19:09:56.833537       1 reflector.go:561] storage/cacher.go:/apiregistration.k8s.io/apiservices: failed to list *apiregistration.APIService: etcdserver: request timed out
	W0920 19:09:56.833804       1 reflector.go:561] storage/cacher.go:/persistentvolumes: failed to list *core.PersistentVolume: etcdserver: request timed out
	W0920 19:09:56.750932       1 reflector.go:561] storage/cacher.go:/leases: failed to list *coordination.Lease: etcdserver: request timed out
	
	
	==> kube-controller-manager [231315ec7d013fd90fbc26e6a4f8fc59d177b677f3ec717caa70d251b84de829] <==
	I0920 18:57:29.220757       1 serving.go:386] Generated self-signed cert in-memory
	I0920 18:57:29.591669       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0920 18:57:29.591756       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:57:29.593495       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0920 18:57:29.593650       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0920 18:57:29.593717       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0920 18:57:29.593811       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0920 18:57:49.858234       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.149:8443/healthz\": dial tcp 192.168.39.149:8443: connect: connection refused"
	
	
	==> kube-controller-manager [d017a5b283a90948c66d42e738cadc0b0558d24b471a758eba6ba1ba1d8f7c38] <==
	W0920 19:10:04.494245       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.LimitRange: Get "https://192.168.39.149:8443/api/v1/limitranges?resourceVersion=3387": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 19:10:04.494385       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.LimitRange: failed to list *v1.LimitRange: Get \"https://192.168.39.149:8443/api/v1/limitranges?resourceVersion=3387\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 19:10:04.746566       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.149:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=3309": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 19:10:04.746693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.149:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=3309\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 19:10:05.615652       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.149:8443/apis/apps/v1/statefulsets?resourceVersion=3387": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 19:10:05.615748       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.149:8443/apis/apps/v1/statefulsets?resourceVersion=3387\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 19:10:05.787887       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.149:8443/apis/apps/v1/replicasets?resourceVersion=3387": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 19:10:05.787987       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.149:8443/apis/apps/v1/replicasets?resourceVersion=3387\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 19:10:05.869830       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.149:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 19:10:05.922008       1 gc_controller.go:151] "Failed to get node" err="node \"ha-525790-m03\" not found" logger="pod-garbage-collector-controller" node="ha-525790-m03"
	E0920 19:10:05.922055       1 gc_controller.go:151] "Failed to get node" err="node \"ha-525790-m03\" not found" logger="pod-garbage-collector-controller" node="ha-525790-m03"
	E0920 19:10:05.922075       1 gc_controller.go:151] "Failed to get node" err="node \"ha-525790-m03\" not found" logger="pod-garbage-collector-controller" node="ha-525790-m03"
	E0920 19:10:05.922081       1 gc_controller.go:151] "Failed to get node" err="node \"ha-525790-m03\" not found" logger="pod-garbage-collector-controller" node="ha-525790-m03"
	E0920 19:10:05.922087       1 gc_controller.go:151] "Failed to get node" err="node \"ha-525790-m03\" not found" logger="pod-garbage-collector-controller" node="ha-525790-m03"
	W0920 19:10:05.922607       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.149:8443/api/v1/namespaces/kube-system/serviceaccounts/pod-garbage-collector": dial tcp 192.168.39.149:8443: connect: connection refused
	W0920 19:10:06.352764       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodTemplate: Get "https://192.168.39.149:8443/api/v1/podtemplates?resourceVersion=3387": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 19:10:06.352856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodTemplate: failed to list *v1.PodTemplate: Get \"https://192.168.39.149:8443/api/v1/podtemplates?resourceVersion=3387\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 19:10:06.370185       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.149:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.149:8443: connect: connection refused
	W0920 19:10:06.423763       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.149:8443/api/v1/namespaces/kube-system/serviceaccounts/pod-garbage-collector": dial tcp 192.168.39.149:8443: connect: connection refused
	W0920 19:10:07.371631       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.149:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.149:8443: connect: connection refused
	W0920 19:10:07.424682       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.149:8443/api/v1/namespaces/kube-system/serviceaccounts/pod-garbage-collector": dial tcp 192.168.39.149:8443: connect: connection refused
	W0920 19:10:07.808017       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ResourceQuota: Get "https://192.168.39.149:8443/api/v1/resourcequotas?resourceVersion=3387": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 19:10:07.808077       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ResourceQuota: failed to list *v1.ResourceQuota: Get \"https://192.168.39.149:8443/api/v1/resourcequotas?resourceVersion=3387\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 19:10:07.970412       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PriorityLevelConfiguration: Get "https://192.168.39.149:8443/apis/flowcontrol.apiserver.k8s.io/v1/prioritylevelconfigurations?resourceVersion=3387": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 19:10:07.970474       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PriorityLevelConfiguration: failed to list *v1.PriorityLevelConfiguration: Get \"https://192.168.39.149:8443/apis/flowcontrol.apiserver.k8s.io/v1/prioritylevelconfigurations?resourceVersion=3387\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-proxy [3d469134674c281ba827b084e4a110ba92d2ed1fa20f56bc60dbfa6aa0ceb5a8] <==
	E0920 18:54:29.445959       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:54:32.517585       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:54:32.517837       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:54:32.517758       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:54:32.518080       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:54:38.533704       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:54:38.533801       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:54:44.681435       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:54:44.681569       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:54:47.750519       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:54:47.750703       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:54:50.823395       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:54:50.823514       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:55:03.112602       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:55:03.112697       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:55:06.182242       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:55:06.182543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:55:12.325811       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:55:12.325963       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:55:30.759875       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:55:30.760470       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1679\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:55:46.118426       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:55:46.118557       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=1677\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 18:55:46.118676       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 18:55:46.118740       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1705\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [fefbc436d3eff2c5abd1684c59842894a743e26f9eb3ca974dbbc048112157d3] <==
	E0920 18:57:52.070869       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-525790\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0920 18:58:09.389517       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.149"]
	E0920 18:58:09.389799       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:58:09.436483       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 18:58:09.436606       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 18:58:09.436666       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:58:09.441038       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:58:09.441633       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:58:09.442010       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:58:09.446213       1 config.go:199] "Starting service config controller"
	I0920 18:58:09.446402       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:58:09.446500       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:58:09.446578       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:58:09.448018       1 config.go:328] "Starting node config controller"
	I0920 18:58:09.448140       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:58:09.548198       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:58:09.548215       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:58:09.548482       1 shared_informer.go:320] Caches are synced for node config
	E0920 19:09:16.358571       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=3373&timeout=6m33s&timeoutSeconds=393&watch=true\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0920 19:09:22.503339       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=3291&timeout=8m11s&timeoutSeconds=491&watch=true\": dial tcp 192.168.39.254:8443: connect: no route to host - error from a previous attempt: dial tcp 192.168.39.254:8443: i/o timeout" logger="UnhandledError"
	E0920 19:09:25.574066       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dha-525790&resourceVersion=3282&timeout=7m7s&timeoutSeconds=427&watch=true\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 19:09:56.293979       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=3373": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 19:09:56.294235       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=3373\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 19:10:02.439248       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=3282": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 19:10:02.439452       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-525790&resourceVersion=3282\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [6cf18d395747b302270fce67fcca779ef04ced72182d681e473e9fc81edb7b5a] <==
	E0920 19:09:37.515820       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:09:43.251868       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 19:09:43.251983       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:09:45.357462       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 19:09:45.357590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:09:50.377484       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 19:09:50.377544       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:09:50.509775       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 19:09:50.509923       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:09:50.940898       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 19:09:50.941115       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:09:51.208390       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 19:09:51.208636       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 19:09:53.066793       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 19:09:53.066942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:09:57.872350       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.149:8443/apis/policy/v1/poddisruptionbudgets?resourceVersion=3293": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 19:09:57.872443       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.149:8443/apis/policy/v1/poddisruptionbudgets?resourceVersion=3293\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 19:09:58.795487       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.149:8443/apis/apps/v1/replicasets?resourceVersion=3387": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 19:09:58.795622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.149:8443/apis/apps/v1/replicasets?resourceVersion=3387\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 19:10:00.773827       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.149:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&resourceVersion=3387": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 19:10:00.773911       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.149:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&resourceVersion=3387\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 19:10:05.829526       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.149:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=3387": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 19:10:05.829672       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.149:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=3387\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	W0920 19:10:07.109769       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.149:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=3387": dial tcp 192.168.39.149:8443: connect: connection refused
	E0920 19:10:07.109818       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.149:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=3387\": dial tcp 192.168.39.149:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-scheduler [7d0496391eb85bc5bf9184bbb6298b5c312a0b6ed802603ee0d09c7f78fb9706] <==
	I0920 18:50:26.263699       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-w98cx" node="ha-525790-m04"
	E0920 18:50:26.297985       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-hwgsh\": pod kindnet-hwgsh is already assigned to node \"ha-525790-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-hwgsh" node="ha-525790-m04"
	E0920 18:50:26.298064       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9ff40332-cdad-4e9f-99ca-28d1271713a8(kube-system/kindnet-hwgsh) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-hwgsh"
	E0920 18:50:26.298079       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-hwgsh\": pod kindnet-hwgsh is already assigned to node \"ha-525790-m04\"" pod="kube-system/kindnet-hwgsh"
	I0920 18:50:26.298095       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-hwgsh" node="ha-525790-m04"
	E0920 18:50:26.298461       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-rh89s\": pod kube-proxy-rh89s is already assigned to node \"ha-525790-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-rh89s" node="ha-525790-m04"
	E0920 18:50:26.298512       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 340d5abf-2e79-4cc0-8f1f-130c1e176259(kube-system/kube-proxy-rh89s) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-rh89s"
	E0920 18:50:26.298529       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-rh89s\": pod kube-proxy-rh89s is already assigned to node \"ha-525790-m04\"" pod="kube-system/kube-proxy-rh89s"
	I0920 18:50:26.298548       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-rh89s" node="ha-525790-m04"
	E0920 18:55:33.838133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0920 18:55:34.010012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0920 18:55:34.163933       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0920 18:55:35.197228       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0920 18:55:38.126361       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0920 18:55:38.323639       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0920 18:55:39.518704       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0920 18:55:39.859524       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0920 18:55:40.061646       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0920 18:55:40.131765       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0920 18:55:41.452147       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0920 18:55:43.449377       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0920 18:55:44.076439       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0920 18:55:44.277830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0920 18:55:45.932626       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0920 18:55:46.167365       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 20 19:09:59 ha-525790 kubelet[1305]: E0920 19:09:59.365772    1305 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=3222\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 20 19:09:59 ha-525790 kubelet[1305]: W0920 19:09:59.365841    1305 reflector.go:561] pkg/kubelet/config/apiserver.go:66: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dha-525790&resourceVersion=3366": dial tcp 192.168.39.254:8443: connect: no route to host
	Sep 20 19:09:59 ha-525790 kubelet[1305]: E0920 19:09:59.365867    1305 reflector.go:158] "Unhandled Error" err="pkg/kubelet/config/apiserver.go:66: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dha-525790&resourceVersion=3366\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 20 19:09:59 ha-525790 kubelet[1305]: I0920 19:09:59.365915    1305 status_manager.go:851] "Failed to get status for pod" podUID="09c07a212745d10d359109606d1f8e5a" pod="kube-system/kube-apiserver-ha-525790" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 20 19:09:59 ha-525790 kubelet[1305]: E0920 19:09:59.366366    1305 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-09-20T19:09:57Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-20T19:09:57Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-20T19:09:57Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-20T19:09:57Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"ha-525790\": Patch \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-525790/status?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 20 19:09:59 ha-525790 kubelet[1305]: I0920 19:09:59.961349    1305 scope.go:117] "RemoveContainer" containerID="ba3571b7729068fde6139487b328bb566b050fb1d9fcd4cd1ed13963c333c6af"
	Sep 20 19:09:59 ha-525790 kubelet[1305]: E0920 19:09:59.961634    1305 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-ha-525790_kube-system(09c07a212745d10d359109606d1f8e5a)\"" pod="kube-system/kube-apiserver-ha-525790" podUID="09c07a212745d10d359109606d1f8e5a"
	Sep 20 19:10:00 ha-525790 kubelet[1305]: E0920 19:10:00.053142    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859400052799315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:10:00 ha-525790 kubelet[1305]: E0920 19:10:00.053187    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859400052799315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:10:02 ha-525790 kubelet[1305]: I0920 19:10:02.437768    1305 status_manager.go:851] "Failed to get status for pod" podUID="a2b3e6b5917d1f11b27828fbc85076e4" pod="kube-system/etcd-ha-525790" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-ha-525790\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 20 19:10:02 ha-525790 kubelet[1305]: W0920 19:10:02.438320    1305 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=3222": dial tcp 192.168.39.254:8443: connect: no route to host
	Sep 20 19:10:02 ha-525790 kubelet[1305]: E0920 19:10:02.438447    1305 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=3222\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 20 19:10:02 ha-525790 kubelet[1305]: E0920 19:10:02.438459    1305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-525790?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Sep 20 19:10:02 ha-525790 kubelet[1305]: E0920 19:10:02.437786    1305 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ha-525790\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-525790?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 20 19:10:05 ha-525790 kubelet[1305]: E0920 19:10:05.509672    1305 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-525790.17f70898370b73b5\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-525790.17f70898370b73b5  kube-system   1810 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-525790,UID:09c07a212745d10d359109606d1f8e5a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-525790,},FirstTimestamp:2024-09-20 18:53:51 +0000 UTC,LastTimestamp:2024-09-20 19:07:21.966241424 +0000 UTC m=+1202.509763063,Count:9,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-525790,}"
	Sep 20 19:10:05 ha-525790 kubelet[1305]: I0920 19:10:05.509865    1305 status_manager.go:851] "Failed to get status for pod" podUID="09c07a212745d10d359109606d1f8e5a" pod="kube-system/kube-apiserver-ha-525790" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-525790\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 20 19:10:05 ha-525790 kubelet[1305]: E0920 19:10:05.510213    1305 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ha-525790\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-525790?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 20 19:10:05 ha-525790 kubelet[1305]: I0920 19:10:05.557507    1305 scope.go:117] "RemoveContainer" containerID="ba3571b7729068fde6139487b328bb566b050fb1d9fcd4cd1ed13963c333c6af"
	Sep 20 19:10:05 ha-525790 kubelet[1305]: E0920 19:10:05.557842    1305 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-ha-525790_kube-system(09c07a212745d10d359109606d1f8e5a)\"" pod="kube-system/kube-apiserver-ha-525790" podUID="09c07a212745d10d359109606d1f8e5a"
	Sep 20 19:10:06 ha-525790 kubelet[1305]: I0920 19:10:06.616352    1305 scope.go:117] "RemoveContainer" containerID="68cc5ea6e0ba5d41fc6dcbf5c17e4a2a946a42ecf180623a73a5cde5c439f527"
	Sep 20 19:10:06 ha-525790 kubelet[1305]: E0920 19:10:06.616540    1305 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ea6bf34f-c1f7-4216-a61f-be30846c991b)\"" pod="kube-system/storage-provisioner" podUID="ea6bf34f-c1f7-4216-a61f-be30846c991b"
	Sep 20 19:10:08 ha-525790 kubelet[1305]: W0920 19:10:08.581877    1305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=3315": dial tcp 192.168.39.254:8443: connect: no route to host
	Sep 20 19:10:08 ha-525790 kubelet[1305]: E0920 19:10:08.581987    1305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=3315\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 20 19:10:08 ha-525790 kubelet[1305]: E0920 19:10:08.582099    1305 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ha-525790\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-525790?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 20 19:10:08 ha-525790 kubelet[1305]: I0920 19:10:08.582464    1305 status_manager.go:851] "Failed to get status for pod" podUID="ea6bf34f-c1f7-4216-a61f-be30846c991b" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.254:8443: connect: no route to host"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 19:10:07.572458  772610 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19678-739831/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
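Note on the stderr above: "bufio.Scanner: token too long" is Go's standard error when a single line (here in lastStart.txt) exceeds the scanner's default 64 KiB token limit. A minimal sketch, assuming a hypothetical log file with very long lines, of how a reader can raise that limit instead of failing:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical path standing in for a log file that contains very long lines.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Default limit is bufio.MaxScanTokenSize (64 KiB); allow lines up to 1 MiB.
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)

		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			// With the default buffer, this is where "token too long" surfaces.
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}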
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-525790 -n ha-525790
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-525790 -n ha-525790: exit status 2 (223.6328ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-525790" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (175.66s)
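Several kubelet and informer entries above fail repeatedly with "dial tcp 192.168.39.254:8443: connect: no route to host", i.e. the HA control-plane VIP is unreachable from the node. A minimal connectivity probe, assuming only the address and port taken from those log lines, that reproduces the same class of error from Go:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Address and port taken from the "no route to host" lines in the log above.
		addr := "192.168.39.254:8443"
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			// EHOSTUNREACH from the kernel is reported by Go as "connect: no route to host".
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("TCP connect to", addr, "succeeded")
	}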

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (327.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-756894
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-756894
E0920 19:24:27.246959  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:24:38.974123  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:24:55.905957  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-756894: exit status 82 (2m1.82190238s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-756894-m03"  ...
	* Stopping node "multinode-756894-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-756894" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-756894 --wait=true -v=8 --alsologtostderr
E0920 19:26:24.183992  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-756894 --wait=true -v=8 --alsologtostderr: (3m23.332216485s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-756894
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-756894 -n multinode-756894
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-756894 logs -n 25: (1.472704409s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-756894 ssh -n                                                                 | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | multinode-756894-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-756894 cp multinode-756894-m02:/home/docker/cp-test.txt                       | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3797588952/001/cp-test_multinode-756894-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-756894 ssh -n                                                                 | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | multinode-756894-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-756894 cp multinode-756894-m02:/home/docker/cp-test.txt                       | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | multinode-756894:/home/docker/cp-test_multinode-756894-m02_multinode-756894.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-756894 ssh -n                                                                 | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | multinode-756894-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-756894 ssh -n multinode-756894 sudo cat                                       | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | /home/docker/cp-test_multinode-756894-m02_multinode-756894.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-756894 cp multinode-756894-m02:/home/docker/cp-test.txt                       | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | multinode-756894-m03:/home/docker/cp-test_multinode-756894-m02_multinode-756894-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-756894 ssh -n                                                                 | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | multinode-756894-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-756894 ssh -n multinode-756894-m03 sudo cat                                   | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | /home/docker/cp-test_multinode-756894-m02_multinode-756894-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-756894 cp testdata/cp-test.txt                                                | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | multinode-756894-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-756894 ssh -n                                                                 | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | multinode-756894-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-756894 cp multinode-756894-m03:/home/docker/cp-test.txt                       | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3797588952/001/cp-test_multinode-756894-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-756894 ssh -n                                                                 | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | multinode-756894-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-756894 cp multinode-756894-m03:/home/docker/cp-test.txt                       | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | multinode-756894:/home/docker/cp-test_multinode-756894-m03_multinode-756894.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-756894 ssh -n                                                                 | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | multinode-756894-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-756894 ssh -n multinode-756894 sudo cat                                       | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | /home/docker/cp-test_multinode-756894-m03_multinode-756894.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-756894 cp multinode-756894-m03:/home/docker/cp-test.txt                       | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | multinode-756894-m02:/home/docker/cp-test_multinode-756894-m03_multinode-756894-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-756894 ssh -n                                                                 | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | multinode-756894-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-756894 ssh -n multinode-756894-m02 sudo cat                                   | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | /home/docker/cp-test_multinode-756894-m03_multinode-756894-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-756894 node stop m03                                                          | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	| node    | multinode-756894 node start                                                             | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:23 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-756894                                                                | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:23 UTC |                     |
	| stop    | -p multinode-756894                                                                     | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:23 UTC |                     |
	| start   | -p multinode-756894                                                                     | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:28 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-756894                                                                | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:28 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 19:25:21
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 19:25:21.724886  782266 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:25:21.725016  782266 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:25:21.725027  782266 out.go:358] Setting ErrFile to fd 2...
	I0920 19:25:21.725033  782266 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:25:21.725242  782266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
	I0920 19:25:21.725833  782266 out.go:352] Setting JSON to false
	I0920 19:25:21.726945  782266 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11272,"bootTime":1726849050,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 19:25:21.727058  782266 start.go:139] virtualization: kvm guest
	I0920 19:25:21.730068  782266 out.go:177] * [multinode-756894] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 19:25:21.731556  782266 notify.go:220] Checking for updates...
	I0920 19:25:21.731617  782266 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 19:25:21.733259  782266 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:25:21.734632  782266 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 19:25:21.735915  782266 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 19:25:21.737540  782266 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 19:25:21.738918  782266 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:25:21.740601  782266 config.go:182] Loaded profile config "multinode-756894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:25:21.740725  782266 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:25:21.741244  782266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:25:21.741299  782266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:25:21.756766  782266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44201
	I0920 19:25:21.757319  782266 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:25:21.757902  782266 main.go:141] libmachine: Using API Version  1
	I0920 19:25:21.757925  782266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:25:21.758315  782266 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:25:21.758499  782266 main.go:141] libmachine: (multinode-756894) Calling .DriverName
	I0920 19:25:21.794138  782266 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 19:25:21.795633  782266 start.go:297] selected driver: kvm2
	I0920 19:25:21.795654  782266 start.go:901] validating driver "kvm2" against &{Name:multinode-756894 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:multinode-756894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.204 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.72 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false insp
ektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:25:21.795792  782266 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:25:21.796175  782266 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:25:21.796272  782266 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19678-739831/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 19:25:21.811914  782266 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 19:25:21.812633  782266 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:25:21.812664  782266 cni.go:84] Creating CNI manager for ""
	I0920 19:25:21.812736  782266 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0920 19:25:21.812807  782266 start.go:340] cluster config:
	{Name:multinode-756894 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-756894 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.204 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.72 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:25:21.812967  782266 iso.go:125] acquiring lock: {Name:mk7c8e0c52ea50ffb7ac28fb9347d4f667085c62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:25:21.815324  782266 out.go:177] * Starting "multinode-756894" primary control-plane node in "multinode-756894" cluster
	I0920 19:25:21.816500  782266 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:25:21.816553  782266 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 19:25:21.816568  782266 cache.go:56] Caching tarball of preloaded images
	I0920 19:25:21.816642  782266 preload.go:172] Found /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 19:25:21.816655  782266 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 19:25:21.816827  782266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/multinode-756894/config.json ...
	I0920 19:25:21.817044  782266 start.go:360] acquireMachinesLock for multinode-756894: {Name:mke27b943eaf3105a3a7818ba8cbb5bd07aa92e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 19:25:21.817103  782266 start.go:364] duration metric: took 38.186µs to acquireMachinesLock for "multinode-756894"
	I0920 19:25:21.817124  782266 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:25:21.817130  782266 fix.go:54] fixHost starting: 
	I0920 19:25:21.817442  782266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:25:21.817475  782266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:25:21.831894  782266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32991
	I0920 19:25:21.832292  782266 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:25:21.832749  782266 main.go:141] libmachine: Using API Version  1
	I0920 19:25:21.832775  782266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:25:21.833212  782266 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:25:21.833419  782266 main.go:141] libmachine: (multinode-756894) Calling .DriverName
	I0920 19:25:21.833580  782266 main.go:141] libmachine: (multinode-756894) Calling .GetState
	I0920 19:25:21.835220  782266 fix.go:112] recreateIfNeeded on multinode-756894: state=Running err=<nil>
	W0920 19:25:21.835241  782266 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:25:21.840897  782266 out.go:177] * Updating the running kvm2 "multinode-756894" VM ...
	I0920 19:25:21.845426  782266 machine.go:93] provisionDockerMachine start ...
	I0920 19:25:21.845453  782266 main.go:141] libmachine: (multinode-756894) Calling .DriverName
	I0920 19:25:21.845697  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHHostname
	I0920 19:25:21.848439  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:25:21.848911  782266 main.go:141] libmachine: (multinode-756894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:41:fc", ip: ""} in network mk-multinode-756894: {Iface:virbr1 ExpiryTime:2024-09-20 20:19:52 +0000 UTC Type:0 Mac:52:54:00:69:41:fc Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-756894 Clientid:01:52:54:00:69:41:fc}
	I0920 19:25:21.848944  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined IP address 192.168.39.168 and MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:25:21.849093  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHPort
	I0920 19:25:21.849266  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHKeyPath
	I0920 19:25:21.849405  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHKeyPath
	I0920 19:25:21.849542  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHUsername
	I0920 19:25:21.849668  782266 main.go:141] libmachine: Using SSH client type: native
	I0920 19:25:21.849875  782266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0920 19:25:21.849895  782266 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:25:21.960038  782266 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-756894
	
	I0920 19:25:21.960069  782266 main.go:141] libmachine: (multinode-756894) Calling .GetMachineName
	I0920 19:25:21.960301  782266 buildroot.go:166] provisioning hostname "multinode-756894"
	I0920 19:25:21.960351  782266 main.go:141] libmachine: (multinode-756894) Calling .GetMachineName
	I0920 19:25:21.960582  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHHostname
	I0920 19:25:21.963232  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:25:21.963576  782266 main.go:141] libmachine: (multinode-756894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:41:fc", ip: ""} in network mk-multinode-756894: {Iface:virbr1 ExpiryTime:2024-09-20 20:19:52 +0000 UTC Type:0 Mac:52:54:00:69:41:fc Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-756894 Clientid:01:52:54:00:69:41:fc}
	I0920 19:25:21.963604  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined IP address 192.168.39.168 and MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:25:21.963755  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHPort
	I0920 19:25:21.963943  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHKeyPath
	I0920 19:25:21.964120  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHKeyPath
	I0920 19:25:21.964268  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHUsername
	I0920 19:25:21.964434  782266 main.go:141] libmachine: Using SSH client type: native
	I0920 19:25:21.964634  782266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0920 19:25:21.964650  782266 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-756894 && echo "multinode-756894" | sudo tee /etc/hostname
	I0920 19:25:22.086424  782266 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-756894
	
	I0920 19:25:22.086476  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHHostname
	I0920 19:25:22.089401  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:25:22.089699  782266 main.go:141] libmachine: (multinode-756894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:41:fc", ip: ""} in network mk-multinode-756894: {Iface:virbr1 ExpiryTime:2024-09-20 20:19:52 +0000 UTC Type:0 Mac:52:54:00:69:41:fc Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-756894 Clientid:01:52:54:00:69:41:fc}
	I0920 19:25:22.089730  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined IP address 192.168.39.168 and MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:25:22.089976  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHPort
	I0920 19:25:22.090185  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHKeyPath
	I0920 19:25:22.090374  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHKeyPath
	I0920 19:25:22.090540  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHUsername
	I0920 19:25:22.090706  782266 main.go:141] libmachine: Using SSH client type: native
	I0920 19:25:22.090974  782266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0920 19:25:22.090998  782266 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-756894' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-756894/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-756894' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:25:22.204031  782266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:25:22.204063  782266 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19678-739831/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-739831/.minikube}
	I0920 19:25:22.204106  782266 buildroot.go:174] setting up certificates
	I0920 19:25:22.204128  782266 provision.go:84] configureAuth start
	I0920 19:25:22.204145  782266 main.go:141] libmachine: (multinode-756894) Calling .GetMachineName
	I0920 19:25:22.204406  782266 main.go:141] libmachine: (multinode-756894) Calling .GetIP
	I0920 19:25:22.207156  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:25:22.207506  782266 main.go:141] libmachine: (multinode-756894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:41:fc", ip: ""} in network mk-multinode-756894: {Iface:virbr1 ExpiryTime:2024-09-20 20:19:52 +0000 UTC Type:0 Mac:52:54:00:69:41:fc Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-756894 Clientid:01:52:54:00:69:41:fc}
	I0920 19:25:22.207530  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined IP address 192.168.39.168 and MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:25:22.207664  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHHostname
	I0920 19:25:22.209875  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:25:22.210210  782266 main.go:141] libmachine: (multinode-756894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:41:fc", ip: ""} in network mk-multinode-756894: {Iface:virbr1 ExpiryTime:2024-09-20 20:19:52 +0000 UTC Type:0 Mac:52:54:00:69:41:fc Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-756894 Clientid:01:52:54:00:69:41:fc}
	I0920 19:25:22.210250  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined IP address 192.168.39.168 and MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:25:22.210354  782266 provision.go:143] copyHostCerts
	I0920 19:25:22.210385  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 19:25:22.210431  782266 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem, removing ...
	I0920 19:25:22.210447  782266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 19:25:22.210518  782266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem (1078 bytes)
	I0920 19:25:22.210601  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 19:25:22.210618  782266 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem, removing ...
	I0920 19:25:22.210624  782266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 19:25:22.210655  782266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem (1123 bytes)
	I0920 19:25:22.210715  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 19:25:22.210731  782266 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem, removing ...
	I0920 19:25:22.210736  782266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 19:25:22.210759  782266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem (1679 bytes)
	I0920 19:25:22.210818  782266 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem org=jenkins.multinode-756894 san=[127.0.0.1 192.168.39.168 localhost minikube multinode-756894]
	I0920 19:25:22.375032  782266 provision.go:177] copyRemoteCerts
	I0920 19:25:22.375099  782266 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:25:22.375124  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHHostname
	I0920 19:25:22.377912  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:25:22.378221  782266 main.go:141] libmachine: (multinode-756894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:41:fc", ip: ""} in network mk-multinode-756894: {Iface:virbr1 ExpiryTime:2024-09-20 20:19:52 +0000 UTC Type:0 Mac:52:54:00:69:41:fc Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-756894 Clientid:01:52:54:00:69:41:fc}
	I0920 19:25:22.378252  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined IP address 192.168.39.168 and MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:25:22.378476  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHPort
	I0920 19:25:22.378691  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHKeyPath
	I0920 19:25:22.378870  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHUsername
	I0920 19:25:22.379022  782266 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/multinode-756894/id_rsa Username:docker}
	I0920 19:25:22.461814  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 19:25:22.461901  782266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:25:22.487621  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 19:25:22.487700  782266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0920 19:25:22.512583  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 19:25:22.512676  782266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 19:25:22.538002  782266 provision.go:87] duration metric: took 333.853129ms to configureAuth
	I0920 19:25:22.538034  782266 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:25:22.538311  782266 config.go:182] Loaded profile config "multinode-756894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:25:22.538407  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHHostname
	I0920 19:25:22.541117  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:25:22.541449  782266 main.go:141] libmachine: (multinode-756894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:41:fc", ip: ""} in network mk-multinode-756894: {Iface:virbr1 ExpiryTime:2024-09-20 20:19:52 +0000 UTC Type:0 Mac:52:54:00:69:41:fc Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-756894 Clientid:01:52:54:00:69:41:fc}
	I0920 19:25:22.541470  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined IP address 192.168.39.168 and MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:25:22.541666  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHPort
	I0920 19:25:22.541876  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHKeyPath
	I0920 19:25:22.542027  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHKeyPath
	I0920 19:25:22.542157  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHUsername
	I0920 19:25:22.542345  782266 main.go:141] libmachine: Using SSH client type: native
	I0920 19:25:22.542520  782266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0920 19:25:22.542535  782266 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:26:53.297866  782266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:26:53.297908  782266 machine.go:96] duration metric: took 1m31.452461199s to provisionDockerMachine
	I0920 19:26:53.297927  782266 start.go:293] postStartSetup for "multinode-756894" (driver="kvm2")
	I0920 19:26:53.297941  782266 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:26:53.297960  782266 main.go:141] libmachine: (multinode-756894) Calling .DriverName
	I0920 19:26:53.298281  782266 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:26:53.298308  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHHostname
	I0920 19:26:53.301683  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:26:53.302134  782266 main.go:141] libmachine: (multinode-756894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:41:fc", ip: ""} in network mk-multinode-756894: {Iface:virbr1 ExpiryTime:2024-09-20 20:19:52 +0000 UTC Type:0 Mac:52:54:00:69:41:fc Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-756894 Clientid:01:52:54:00:69:41:fc}
	I0920 19:26:53.302166  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined IP address 192.168.39.168 and MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:26:53.302345  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHPort
	I0920 19:26:53.302519  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHKeyPath
	I0920 19:26:53.302663  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHUsername
	I0920 19:26:53.302809  782266 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/multinode-756894/id_rsa Username:docker}
	I0920 19:26:53.387266  782266 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:26:53.391383  782266 command_runner.go:130] > NAME=Buildroot
	I0920 19:26:53.391405  782266 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0920 19:26:53.391412  782266 command_runner.go:130] > ID=buildroot
	I0920 19:26:53.391419  782266 command_runner.go:130] > VERSION_ID=2023.02.9
	I0920 19:26:53.391426  782266 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0920 19:26:53.391475  782266 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:26:53.391491  782266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/addons for local assets ...
	I0920 19:26:53.391576  782266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/files for local assets ...
	I0920 19:26:53.391691  782266 filesync.go:149] local asset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> 7484972.pem in /etc/ssl/certs
	I0920 19:26:53.391705  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /etc/ssl/certs/7484972.pem
	I0920 19:26:53.391798  782266 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:26:53.401643  782266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 19:26:53.427284  782266 start.go:296] duration metric: took 129.340801ms for postStartSetup
	I0920 19:26:53.427365  782266 fix.go:56] duration metric: took 1m31.610234241s for fixHost
	I0920 19:26:53.427424  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHHostname
	I0920 19:26:53.430056  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:26:53.430537  782266 main.go:141] libmachine: (multinode-756894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:41:fc", ip: ""} in network mk-multinode-756894: {Iface:virbr1 ExpiryTime:2024-09-20 20:19:52 +0000 UTC Type:0 Mac:52:54:00:69:41:fc Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-756894 Clientid:01:52:54:00:69:41:fc}
	I0920 19:26:53.430571  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined IP address 192.168.39.168 and MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:26:53.430735  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHPort
	I0920 19:26:53.430961  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHKeyPath
	I0920 19:26:53.431104  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHKeyPath
	I0920 19:26:53.431238  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHUsername
	I0920 19:26:53.431376  782266 main.go:141] libmachine: Using SSH client type: native
	I0920 19:26:53.431539  782266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0920 19:26:53.431548  782266 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:26:53.539770  782266 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726860413.517005152
	
	I0920 19:26:53.539800  782266 fix.go:216] guest clock: 1726860413.517005152
	I0920 19:26:53.539810  782266 fix.go:229] Guest: 2024-09-20 19:26:53.517005152 +0000 UTC Remote: 2024-09-20 19:26:53.427369408 +0000 UTC m=+91.740554816 (delta=89.635744ms)
	I0920 19:26:53.539862  782266 fix.go:200] guest clock delta is within tolerance: 89.635744ms
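	The clock-skew figure above follows directly from the two timestamps in the log. As a quick sanity check, the subtraction can be reproduced with bc (the bc invocation is illustrative and assumes the tool is available; the numbers are taken verbatim from the lines above):
	    # guest 'date +%s.%N' reading minus the host-side Remote timestamp, in milliseconds
	    echo '(1726860413.517005152 - 1726860413.427369408) * 1000' | bc
	    # prints 89.635744000, i.e. the 89.635744ms delta reported above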
	I0920 19:26:53.539870  782266 start.go:83] releasing machines lock for "multinode-756894", held for 1m31.722753741s
	I0920 19:26:53.539898  782266 main.go:141] libmachine: (multinode-756894) Calling .DriverName
	I0920 19:26:53.540133  782266 main.go:141] libmachine: (multinode-756894) Calling .GetIP
	I0920 19:26:53.543025  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:26:53.543374  782266 main.go:141] libmachine: (multinode-756894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:41:fc", ip: ""} in network mk-multinode-756894: {Iface:virbr1 ExpiryTime:2024-09-20 20:19:52 +0000 UTC Type:0 Mac:52:54:00:69:41:fc Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-756894 Clientid:01:52:54:00:69:41:fc}
	I0920 19:26:53.543409  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined IP address 192.168.39.168 and MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:26:53.543604  782266 main.go:141] libmachine: (multinode-756894) Calling .DriverName
	I0920 19:26:53.544241  782266 main.go:141] libmachine: (multinode-756894) Calling .DriverName
	I0920 19:26:53.544392  782266 main.go:141] libmachine: (multinode-756894) Calling .DriverName
	I0920 19:26:53.544482  782266 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:26:53.544539  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHHostname
	I0920 19:26:53.544611  782266 ssh_runner.go:195] Run: cat /version.json
	I0920 19:26:53.544637  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHHostname
	I0920 19:26:53.547164  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:26:53.547428  782266 main.go:141] libmachine: (multinode-756894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:41:fc", ip: ""} in network mk-multinode-756894: {Iface:virbr1 ExpiryTime:2024-09-20 20:19:52 +0000 UTC Type:0 Mac:52:54:00:69:41:fc Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-756894 Clientid:01:52:54:00:69:41:fc}
	I0920 19:26:53.547465  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined IP address 192.168.39.168 and MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:26:53.547572  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:26:53.547618  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHPort
	I0920 19:26:53.547771  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHKeyPath
	I0920 19:26:53.547903  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHUsername
	I0920 19:26:53.548024  782266 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/multinode-756894/id_rsa Username:docker}
	I0920 19:26:53.548056  782266 main.go:141] libmachine: (multinode-756894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:41:fc", ip: ""} in network mk-multinode-756894: {Iface:virbr1 ExpiryTime:2024-09-20 20:19:52 +0000 UTC Type:0 Mac:52:54:00:69:41:fc Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-756894 Clientid:01:52:54:00:69:41:fc}
	I0920 19:26:53.548081  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined IP address 192.168.39.168 and MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:26:53.548263  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHPort
	I0920 19:26:53.548425  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHKeyPath
	I0920 19:26:53.548560  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHUsername
	I0920 19:26:53.548726  782266 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/multinode-756894/id_rsa Username:docker}
	I0920 19:26:53.655461  782266 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0920 19:26:53.655523  782266 command_runner.go:130] > {"iso_version": "v1.34.0-1726481713-19649", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "fcd4ba3dbb1ef408e3a4b79c864df2496ddd3848"}
	I0920 19:26:53.655649  782266 ssh_runner.go:195] Run: systemctl --version
	I0920 19:26:53.661895  782266 command_runner.go:130] > systemd 252 (252)
	I0920 19:26:53.661929  782266 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0920 19:26:53.661988  782266 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:26:53.824376  782266 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 19:26:53.830208  782266 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0920 19:26:53.830400  782266 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:26:53.830459  782266 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:26:53.839724  782266 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 19:26:53.839752  782266 start.go:495] detecting cgroup driver to use...
	I0920 19:26:53.839823  782266 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:26:53.856427  782266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:26:53.871269  782266 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:26:53.871335  782266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:26:53.885564  782266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:26:53.899946  782266 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:26:54.053928  782266 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:26:54.189782  782266 docker.go:233] disabling docker service ...
	I0920 19:26:54.189882  782266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:26:54.206901  782266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:26:54.220619  782266 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:26:54.373767  782266 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:26:54.524979  782266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
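	Taken together, the runtime-selection steps above stop and mask the Docker-family units so CRI-O is the only container runtime left on the guest. A condensed sketch of the same sequence, restating only commands that already appear in the log (illustrative, not an additional test step):
	    # stop containerd if present, then disable cri-dockerd and docker outright
	    sudo systemctl stop -f containerd
	    sudo systemctl stop -f cri-docker.socket
	    sudo systemctl stop -f cri-docker.service
	    sudo systemctl disable cri-docker.socket
	    sudo systemctl mask cri-docker.service
	    sudo systemctl stop -f docker.socket
	    sudo systemctl stop -f docker.service
	    sudo systemctl disable docker.socket
	    sudo systemctl mask docker.service
	    # final check that docker is no longer active, as logged above
	    sudo systemctl is-active --quiet service docker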
	I0920 19:26:54.541881  782266 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:26:54.562558  782266 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0920 19:26:54.562600  782266 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 19:26:54.562657  782266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:26:54.575136  782266 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:26:54.575208  782266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:26:54.587540  782266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:26:54.598353  782266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:26:54.608973  782266 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:26:54.619996  782266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:26:54.630578  782266 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:26:54.641566  782266 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:26:54.652175  782266 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:26:54.662399  782266 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0920 19:26:54.662467  782266 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:26:54.671994  782266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:26:54.808823  782266 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 19:27:01.183060  782266 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.374198752s)
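	The CRI-O reconfiguration logged between 19:26:54 and 19:27:01 boils down to a handful of in-place edits to /etc/crio/crio.conf.d/02-crio.conf followed by a service restart. Collected in one place, and using only the commands already shown above, the sequence looks roughly like this sketch:
	    # use the pause image and cgroupfs driver selected for this profile
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	    sudo rm -rf /etc/cni/net.mk
	    # allow unprivileged low ports in pods and make sure IPv4 forwarding is on
	    sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf
	    sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	    sudo systemctl daemon-reload
	    sudo systemctl restart crio   # ~6.4s in this run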
	I0920 19:27:01.183091  782266 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:27:01.183143  782266 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:27:01.188216  782266 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0920 19:27:01.188238  782266 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0920 19:27:01.188246  782266 command_runner.go:130] > Device: 0,22	Inode: 1303        Links: 1
	I0920 19:27:01.188257  782266 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0920 19:27:01.188266  782266 command_runner.go:130] > Access: 2024-09-20 19:27:01.049940129 +0000
	I0920 19:27:01.188279  782266 command_runner.go:130] > Modify: 2024-09-20 19:27:01.049940129 +0000
	I0920 19:27:01.188285  782266 command_runner.go:130] > Change: 2024-09-20 19:27:01.049940129 +0000
	I0920 19:27:01.188291  782266 command_runner.go:130] >  Birth: -
	I0920 19:27:01.188337  782266 start.go:563] Will wait 60s for crictl version
	I0920 19:27:01.188391  782266 ssh_runner.go:195] Run: which crictl
	I0920 19:27:01.192043  782266 command_runner.go:130] > /usr/bin/crictl
	I0920 19:27:01.192121  782266 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:27:01.227645  782266 command_runner.go:130] > Version:  0.1.0
	I0920 19:27:01.227671  782266 command_runner.go:130] > RuntimeName:  cri-o
	I0920 19:27:01.227704  782266 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0920 19:27:01.227726  782266 command_runner.go:130] > RuntimeApiVersion:  v1
	I0920 19:27:01.228927  782266 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 19:27:01.229017  782266 ssh_runner.go:195] Run: crio --version
	I0920 19:27:01.255656  782266 command_runner.go:130] > crio version 1.29.1
	I0920 19:27:01.255679  782266 command_runner.go:130] > Version:        1.29.1
	I0920 19:27:01.255685  782266 command_runner.go:130] > GitCommit:      unknown
	I0920 19:27:01.255689  782266 command_runner.go:130] > GitCommitDate:  unknown
	I0920 19:27:01.255693  782266 command_runner.go:130] > GitTreeState:   clean
	I0920 19:27:01.255699  782266 command_runner.go:130] > BuildDate:      2024-09-16T15:42:14Z
	I0920 19:27:01.255703  782266 command_runner.go:130] > GoVersion:      go1.21.6
	I0920 19:27:01.255707  782266 command_runner.go:130] > Compiler:       gc
	I0920 19:27:01.255716  782266 command_runner.go:130] > Platform:       linux/amd64
	I0920 19:27:01.255721  782266 command_runner.go:130] > Linkmode:       dynamic
	I0920 19:27:01.255725  782266 command_runner.go:130] > BuildTags:      
	I0920 19:27:01.255729  782266 command_runner.go:130] >   containers_image_ostree_stub
	I0920 19:27:01.255733  782266 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0920 19:27:01.255737  782266 command_runner.go:130] >   btrfs_noversion
	I0920 19:27:01.255742  782266 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0920 19:27:01.255746  782266 command_runner.go:130] >   libdm_no_deferred_remove
	I0920 19:27:01.255752  782266 command_runner.go:130] >   seccomp
	I0920 19:27:01.255756  782266 command_runner.go:130] > LDFlags:          unknown
	I0920 19:27:01.255761  782266 command_runner.go:130] > SeccompEnabled:   true
	I0920 19:27:01.255764  782266 command_runner.go:130] > AppArmorEnabled:  false
	I0920 19:27:01.256964  782266 ssh_runner.go:195] Run: crio --version
	I0920 19:27:01.287442  782266 command_runner.go:130] > crio version 1.29.1
	I0920 19:27:01.287474  782266 command_runner.go:130] > Version:        1.29.1
	I0920 19:27:01.287483  782266 command_runner.go:130] > GitCommit:      unknown
	I0920 19:27:01.287489  782266 command_runner.go:130] > GitCommitDate:  unknown
	I0920 19:27:01.287495  782266 command_runner.go:130] > GitTreeState:   clean
	I0920 19:27:01.287503  782266 command_runner.go:130] > BuildDate:      2024-09-16T15:42:14Z
	I0920 19:27:01.287510  782266 command_runner.go:130] > GoVersion:      go1.21.6
	I0920 19:27:01.287518  782266 command_runner.go:130] > Compiler:       gc
	I0920 19:27:01.287525  782266 command_runner.go:130] > Platform:       linux/amd64
	I0920 19:27:01.287533  782266 command_runner.go:130] > Linkmode:       dynamic
	I0920 19:27:01.287545  782266 command_runner.go:130] > BuildTags:      
	I0920 19:27:01.287556  782266 command_runner.go:130] >   containers_image_ostree_stub
	I0920 19:27:01.287563  782266 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0920 19:27:01.287569  782266 command_runner.go:130] >   btrfs_noversion
	I0920 19:27:01.287580  782266 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0920 19:27:01.287586  782266 command_runner.go:130] >   libdm_no_deferred_remove
	I0920 19:27:01.287591  782266 command_runner.go:130] >   seccomp
	I0920 19:27:01.287606  782266 command_runner.go:130] > LDFlags:          unknown
	I0920 19:27:01.287615  782266 command_runner.go:130] > SeccompEnabled:   true
	I0920 19:27:01.287622  782266 command_runner.go:130] > AppArmorEnabled:  false
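	Before moving on to Kubernetes, the log verifies the runtime end to end: the CRI socket exists, crictl can reach it, and crio reports version 1.29.1. The same checks, run by hand on the guest, would be (a sketch restating commands from the log):
	    stat /var/run/crio/crio.sock        # socket created by the crio restart above
	    which crictl                        # /usr/bin/crictl
	    sudo /usr/bin/crictl version        # RuntimeName: cri-o, RuntimeApiVersion: v1
	    crio --version                      # crio version 1.29.1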
	I0920 19:27:01.290549  782266 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 19:27:01.292060  782266 main.go:141] libmachine: (multinode-756894) Calling .GetIP
	I0920 19:27:01.295096  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:27:01.295554  782266 main.go:141] libmachine: (multinode-756894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:41:fc", ip: ""} in network mk-multinode-756894: {Iface:virbr1 ExpiryTime:2024-09-20 20:19:52 +0000 UTC Type:0 Mac:52:54:00:69:41:fc Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-756894 Clientid:01:52:54:00:69:41:fc}
	I0920 19:27:01.295591  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined IP address 192.168.39.168 and MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:27:01.295819  782266 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 19:27:01.299937  782266 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0920 19:27:01.300034  782266 kubeadm.go:883] updating cluster {Name:multinode-756894 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-756894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.204 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.72 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:27:01.300202  782266 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:27:01.300255  782266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:27:01.351339  782266 command_runner.go:130] > {
	I0920 19:27:01.351365  782266 command_runner.go:130] >   "images": [
	I0920 19:27:01.351370  782266 command_runner.go:130] >     {
	I0920 19:27:01.351378  782266 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0920 19:27:01.351382  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.351388  782266 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0920 19:27:01.351407  782266 command_runner.go:130] >       ],
	I0920 19:27:01.351412  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.351422  782266 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0920 19:27:01.351433  782266 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0920 19:27:01.351438  782266 command_runner.go:130] >       ],
	I0920 19:27:01.351445  782266 command_runner.go:130] >       "size": "87190579",
	I0920 19:27:01.351452  782266 command_runner.go:130] >       "uid": null,
	I0920 19:27:01.351460  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.351468  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.351476  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.351479  782266 command_runner.go:130] >     },
	I0920 19:27:01.351482  782266 command_runner.go:130] >     {
	I0920 19:27:01.351488  782266 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0920 19:27:01.351492  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.351498  782266 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0920 19:27:01.351501  782266 command_runner.go:130] >       ],
	I0920 19:27:01.351506  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.351513  782266 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0920 19:27:01.351526  782266 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0920 19:27:01.351535  782266 command_runner.go:130] >       ],
	I0920 19:27:01.351542  782266 command_runner.go:130] >       "size": "1363676",
	I0920 19:27:01.351552  782266 command_runner.go:130] >       "uid": null,
	I0920 19:27:01.351561  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.351570  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.351574  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.351579  782266 command_runner.go:130] >     },
	I0920 19:27:01.351582  782266 command_runner.go:130] >     {
	I0920 19:27:01.351597  782266 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0920 19:27:01.351603  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.351608  782266 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0920 19:27:01.351614  782266 command_runner.go:130] >       ],
	I0920 19:27:01.351632  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.351645  782266 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0920 19:27:01.351664  782266 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0920 19:27:01.351672  782266 command_runner.go:130] >       ],
	I0920 19:27:01.351682  782266 command_runner.go:130] >       "size": "31470524",
	I0920 19:27:01.351689  782266 command_runner.go:130] >       "uid": null,
	I0920 19:27:01.351693  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.351697  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.351703  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.351709  782266 command_runner.go:130] >     },
	I0920 19:27:01.351717  782266 command_runner.go:130] >     {
	I0920 19:27:01.351727  782266 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0920 19:27:01.351735  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.351743  782266 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0920 19:27:01.351748  782266 command_runner.go:130] >       ],
	I0920 19:27:01.351755  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.351770  782266 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0920 19:27:01.351793  782266 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0920 19:27:01.351802  782266 command_runner.go:130] >       ],
	I0920 19:27:01.351812  782266 command_runner.go:130] >       "size": "63273227",
	I0920 19:27:01.351819  782266 command_runner.go:130] >       "uid": null,
	I0920 19:27:01.351824  782266 command_runner.go:130] >       "username": "nonroot",
	I0920 19:27:01.351833  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.351842  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.351851  782266 command_runner.go:130] >     },
	I0920 19:27:01.351857  782266 command_runner.go:130] >     {
	I0920 19:27:01.351867  782266 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0920 19:27:01.351877  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.351887  782266 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0920 19:27:01.351895  782266 command_runner.go:130] >       ],
	I0920 19:27:01.351904  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.351917  782266 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0920 19:27:01.351927  782266 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0920 19:27:01.351935  782266 command_runner.go:130] >       ],
	I0920 19:27:01.351945  782266 command_runner.go:130] >       "size": "149009664",
	I0920 19:27:01.351956  782266 command_runner.go:130] >       "uid": {
	I0920 19:27:01.351966  782266 command_runner.go:130] >         "value": "0"
	I0920 19:27:01.351974  782266 command_runner.go:130] >       },
	I0920 19:27:01.351983  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.351992  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.352002  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.352008  782266 command_runner.go:130] >     },
	I0920 19:27:01.352012  782266 command_runner.go:130] >     {
	I0920 19:27:01.352024  782266 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0920 19:27:01.352033  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.352042  782266 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0920 19:27:01.352051  782266 command_runner.go:130] >       ],
	I0920 19:27:01.352060  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.352074  782266 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0920 19:27:01.352088  782266 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0920 19:27:01.352097  782266 command_runner.go:130] >       ],
	I0920 19:27:01.352103  782266 command_runner.go:130] >       "size": "95237600",
	I0920 19:27:01.352108  782266 command_runner.go:130] >       "uid": {
	I0920 19:27:01.352114  782266 command_runner.go:130] >         "value": "0"
	I0920 19:27:01.352122  782266 command_runner.go:130] >       },
	I0920 19:27:01.352132  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.352139  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.352148  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.352154  782266 command_runner.go:130] >     },
	I0920 19:27:01.352162  782266 command_runner.go:130] >     {
	I0920 19:27:01.352191  782266 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0920 19:27:01.352206  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.352214  782266 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0920 19:27:01.352220  782266 command_runner.go:130] >       ],
	I0920 19:27:01.352227  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.352243  782266 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0920 19:27:01.352258  782266 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0920 19:27:01.352266  782266 command_runner.go:130] >       ],
	I0920 19:27:01.352279  782266 command_runner.go:130] >       "size": "89437508",
	I0920 19:27:01.352287  782266 command_runner.go:130] >       "uid": {
	I0920 19:27:01.352294  782266 command_runner.go:130] >         "value": "0"
	I0920 19:27:01.352302  782266 command_runner.go:130] >       },
	I0920 19:27:01.352307  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.352312  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.352318  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.352325  782266 command_runner.go:130] >     },
	I0920 19:27:01.352331  782266 command_runner.go:130] >     {
	I0920 19:27:01.352345  782266 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0920 19:27:01.352353  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.352362  782266 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0920 19:27:01.352370  782266 command_runner.go:130] >       ],
	I0920 19:27:01.352377  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.352397  782266 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0920 19:27:01.352411  782266 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0920 19:27:01.352419  782266 command_runner.go:130] >       ],
	I0920 19:27:01.352425  782266 command_runner.go:130] >       "size": "92733849",
	I0920 19:27:01.352434  782266 command_runner.go:130] >       "uid": null,
	I0920 19:27:01.352439  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.352445  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.352451  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.352455  782266 command_runner.go:130] >     },
	I0920 19:27:01.352460  782266 command_runner.go:130] >     {
	I0920 19:27:01.352468  782266 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0920 19:27:01.352473  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.352480  782266 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0920 19:27:01.352484  782266 command_runner.go:130] >       ],
	I0920 19:27:01.352489  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.352504  782266 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0920 19:27:01.352514  782266 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0920 19:27:01.352519  782266 command_runner.go:130] >       ],
	I0920 19:27:01.352525  782266 command_runner.go:130] >       "size": "68420934",
	I0920 19:27:01.352532  782266 command_runner.go:130] >       "uid": {
	I0920 19:27:01.352539  782266 command_runner.go:130] >         "value": "0"
	I0920 19:27:01.352544  782266 command_runner.go:130] >       },
	I0920 19:27:01.352551  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.352557  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.352563  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.352569  782266 command_runner.go:130] >     },
	I0920 19:27:01.352575  782266 command_runner.go:130] >     {
	I0920 19:27:01.352589  782266 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0920 19:27:01.352598  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.352606  782266 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0920 19:27:01.352614  782266 command_runner.go:130] >       ],
	I0920 19:27:01.352627  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.352637  782266 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0920 19:27:01.352651  782266 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0920 19:27:01.352660  782266 command_runner.go:130] >       ],
	I0920 19:27:01.352667  782266 command_runner.go:130] >       "size": "742080",
	I0920 19:27:01.352676  782266 command_runner.go:130] >       "uid": {
	I0920 19:27:01.352683  782266 command_runner.go:130] >         "value": "65535"
	I0920 19:27:01.352692  782266 command_runner.go:130] >       },
	I0920 19:27:01.352701  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.352709  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.352713  782266 command_runner.go:130] >       "pinned": true
	I0920 19:27:01.352718  782266 command_runner.go:130] >     }
	I0920 19:27:01.352723  782266 command_runner.go:130] >   ]
	I0920 19:27:01.352732  782266 command_runner.go:130] > }
	I0920 19:27:01.352950  782266 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 19:27:01.352963  782266 crio.go:433] Images already preloaded, skipping extraction
	I0920 19:27:01.353021  782266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:27:01.387829  782266 command_runner.go:130] > {
	I0920 19:27:01.387850  782266 command_runner.go:130] >   "images": [
	I0920 19:27:01.387858  782266 command_runner.go:130] >     {
	I0920 19:27:01.387867  782266 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0920 19:27:01.387873  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.387890  782266 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0920 19:27:01.387894  782266 command_runner.go:130] >       ],
	I0920 19:27:01.387901  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.387908  782266 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0920 19:27:01.387915  782266 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0920 19:27:01.387925  782266 command_runner.go:130] >       ],
	I0920 19:27:01.387930  782266 command_runner.go:130] >       "size": "87190579",
	I0920 19:27:01.387934  782266 command_runner.go:130] >       "uid": null,
	I0920 19:27:01.387937  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.387945  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.387951  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.387954  782266 command_runner.go:130] >     },
	I0920 19:27:01.387958  782266 command_runner.go:130] >     {
	I0920 19:27:01.387966  782266 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0920 19:27:01.387972  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.387982  782266 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0920 19:27:01.387987  782266 command_runner.go:130] >       ],
	I0920 19:27:01.387992  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.388002  782266 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0920 19:27:01.388017  782266 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0920 19:27:01.388024  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388031  782266 command_runner.go:130] >       "size": "1363676",
	I0920 19:27:01.388035  782266 command_runner.go:130] >       "uid": null,
	I0920 19:27:01.388041  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.388048  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.388051  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.388055  782266 command_runner.go:130] >     },
	I0920 19:27:01.388059  782266 command_runner.go:130] >     {
	I0920 19:27:01.388065  782266 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0920 19:27:01.388071  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.388080  782266 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0920 19:27:01.388086  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388091  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.388098  782266 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0920 19:27:01.388108  782266 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0920 19:27:01.388111  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388115  782266 command_runner.go:130] >       "size": "31470524",
	I0920 19:27:01.388119  782266 command_runner.go:130] >       "uid": null,
	I0920 19:27:01.388123  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.388127  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.388131  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.388135  782266 command_runner.go:130] >     },
	I0920 19:27:01.388138  782266 command_runner.go:130] >     {
	I0920 19:27:01.388144  782266 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0920 19:27:01.388150  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.388155  782266 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0920 19:27:01.388161  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388165  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.388172  782266 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0920 19:27:01.388182  782266 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0920 19:27:01.388188  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388192  782266 command_runner.go:130] >       "size": "63273227",
	I0920 19:27:01.388196  782266 command_runner.go:130] >       "uid": null,
	I0920 19:27:01.388200  782266 command_runner.go:130] >       "username": "nonroot",
	I0920 19:27:01.388209  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.388215  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.388219  782266 command_runner.go:130] >     },
	I0920 19:27:01.388222  782266 command_runner.go:130] >     {
	I0920 19:27:01.388228  782266 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0920 19:27:01.388234  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.388239  782266 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0920 19:27:01.388244  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388248  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.388255  782266 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0920 19:27:01.388263  782266 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0920 19:27:01.388269  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388273  782266 command_runner.go:130] >       "size": "149009664",
	I0920 19:27:01.388277  782266 command_runner.go:130] >       "uid": {
	I0920 19:27:01.388283  782266 command_runner.go:130] >         "value": "0"
	I0920 19:27:01.388286  782266 command_runner.go:130] >       },
	I0920 19:27:01.388291  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.388296  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.388300  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.388305  782266 command_runner.go:130] >     },
	I0920 19:27:01.388308  782266 command_runner.go:130] >     {
	I0920 19:27:01.388314  782266 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0920 19:27:01.388320  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.388325  782266 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0920 19:27:01.388331  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388335  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.388343  782266 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0920 19:27:01.388353  782266 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0920 19:27:01.388358  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388362  782266 command_runner.go:130] >       "size": "95237600",
	I0920 19:27:01.388367  782266 command_runner.go:130] >       "uid": {
	I0920 19:27:01.388371  782266 command_runner.go:130] >         "value": "0"
	I0920 19:27:01.388374  782266 command_runner.go:130] >       },
	I0920 19:27:01.388378  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.388382  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.388386  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.388389  782266 command_runner.go:130] >     },
	I0920 19:27:01.388393  782266 command_runner.go:130] >     {
	I0920 19:27:01.388401  782266 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0920 19:27:01.388405  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.388412  782266 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0920 19:27:01.388416  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388420  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.388429  782266 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0920 19:27:01.388440  782266 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0920 19:27:01.388445  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388450  782266 command_runner.go:130] >       "size": "89437508",
	I0920 19:27:01.388454  782266 command_runner.go:130] >       "uid": {
	I0920 19:27:01.388460  782266 command_runner.go:130] >         "value": "0"
	I0920 19:27:01.388463  782266 command_runner.go:130] >       },
	I0920 19:27:01.388467  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.388471  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.388474  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.388478  782266 command_runner.go:130] >     },
	I0920 19:27:01.388481  782266 command_runner.go:130] >     {
	I0920 19:27:01.388488  782266 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0920 19:27:01.388492  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.388497  782266 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0920 19:27:01.388500  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388504  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.388520  782266 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0920 19:27:01.388529  782266 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0920 19:27:01.388533  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388537  782266 command_runner.go:130] >       "size": "92733849",
	I0920 19:27:01.388541  782266 command_runner.go:130] >       "uid": null,
	I0920 19:27:01.388545  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.388549  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.388553  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.388557  782266 command_runner.go:130] >     },
	I0920 19:27:01.388560  782266 command_runner.go:130] >     {
	I0920 19:27:01.388566  782266 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0920 19:27:01.388573  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.388577  782266 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0920 19:27:01.388582  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388585  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.388594  782266 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0920 19:27:01.388602  782266 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0920 19:27:01.388608  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388613  782266 command_runner.go:130] >       "size": "68420934",
	I0920 19:27:01.388616  782266 command_runner.go:130] >       "uid": {
	I0920 19:27:01.388620  782266 command_runner.go:130] >         "value": "0"
	I0920 19:27:01.388623  782266 command_runner.go:130] >       },
	I0920 19:27:01.388627  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.388631  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.388634  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.388637  782266 command_runner.go:130] >     },
	I0920 19:27:01.388641  782266 command_runner.go:130] >     {
	I0920 19:27:01.388646  782266 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0920 19:27:01.388652  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.388656  782266 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0920 19:27:01.388660  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388664  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.388672  782266 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0920 19:27:01.388682  782266 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0920 19:27:01.388688  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388692  782266 command_runner.go:130] >       "size": "742080",
	I0920 19:27:01.388695  782266 command_runner.go:130] >       "uid": {
	I0920 19:27:01.388699  782266 command_runner.go:130] >         "value": "65535"
	I0920 19:27:01.388703  782266 command_runner.go:130] >       },
	I0920 19:27:01.388706  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.388710  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.388714  782266 command_runner.go:130] >       "pinned": true
	I0920 19:27:01.388717  782266 command_runner.go:130] >     }
	I0920 19:27:01.388720  782266 command_runner.go:130] >   ]
	I0920 19:27:01.388723  782266 command_runner.go:130] > }
	I0920 19:27:01.388836  782266 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 19:27:01.388847  782266 cache_images.go:84] Images are preloaded, skipping loading
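The image inventory above is what minikube reads back from CRI-O before concluding that the preload can be skipped. A minimal sketch of reproducing the same listing by hand, assuming the multinode-756894 profile from this run and that crictl is on the node's PATH (here "minikube" stands for whichever minikube binary the run used):

	# dump CRI-O's image list as JSON from inside the node
	minikube -p multinode-756894 ssh -- sudo crictl images --output json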
	I0920 19:27:01.388873  782266 kubeadm.go:934] updating node { 192.168.39.168 8443 v1.31.1 crio true true} ...
	I0920 19:27:01.388985  782266 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-756894 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.168
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-756894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
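The kubelet flags above are what minikube renders into a systemd unit override on the node. A sketch of inspecting the unit that is actually in effect, assuming the kubelet runs as the systemd service named "kubelet" and using the same profile as above:

	# print the kubelet unit file together with any drop-in overrides
	minikube -p multinode-756894 ssh -- systemctl cat kubelet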
	I0920 19:27:01.389050  782266 ssh_runner.go:195] Run: crio config
	I0920 19:27:01.424121  782266 command_runner.go:130] ! time="2024-09-20 19:27:01.401150239Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0920 19:27:01.429290  782266 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0920 19:27:01.440607  782266 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0920 19:27:01.440630  782266 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0920 19:27:01.440636  782266 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0920 19:27:01.440641  782266 command_runner.go:130] > #
	I0920 19:27:01.440655  782266 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0920 19:27:01.440665  782266 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0920 19:27:01.440674  782266 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0920 19:27:01.440685  782266 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0920 19:27:01.440694  782266 command_runner.go:130] > # reload'.
	I0920 19:27:01.440701  782266 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0920 19:27:01.440710  782266 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0920 19:27:01.440717  782266 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0920 19:27:01.440724  782266 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0920 19:27:01.440733  782266 command_runner.go:130] > [crio]
	I0920 19:27:01.440745  782266 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0920 19:27:01.440754  782266 command_runner.go:130] > # containers images, in this directory.
	I0920 19:27:01.440764  782266 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0920 19:27:01.440780  782266 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0920 19:27:01.440787  782266 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0920 19:27:01.440796  782266 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0920 19:27:01.440803  782266 command_runner.go:130] > # imagestore = ""
	I0920 19:27:01.440811  782266 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0920 19:27:01.440819  782266 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0920 19:27:01.440829  782266 command_runner.go:130] > storage_driver = "overlay"
	I0920 19:27:01.440847  782266 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0920 19:27:01.440859  782266 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0920 19:27:01.440867  782266 command_runner.go:130] > storage_option = [
	I0920 19:27:01.440874  782266 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0920 19:27:01.440879  782266 command_runner.go:130] > ]
	I0920 19:27:01.440885  782266 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0920 19:27:01.440893  782266 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0920 19:27:01.440897  782266 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0920 19:27:01.440905  782266 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0920 19:27:01.440915  782266 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0920 19:27:01.440925  782266 command_runner.go:130] > # always happen on a node reboot
	I0920 19:27:01.440936  782266 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0920 19:27:01.440951  782266 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0920 19:27:01.440964  782266 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0920 19:27:01.440974  782266 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0920 19:27:01.440982  782266 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0920 19:27:01.440989  782266 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0920 19:27:01.441001  782266 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0920 19:27:01.441011  782266 command_runner.go:130] > # internal_wipe = true
	I0920 19:27:01.441026  782266 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0920 19:27:01.441037  782266 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0920 19:27:01.441046  782266 command_runner.go:130] > # internal_repair = false
	I0920 19:27:01.441058  782266 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0920 19:27:01.441069  782266 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0920 19:27:01.441077  782266 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0920 19:27:01.441084  782266 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0920 19:27:01.441099  782266 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0920 19:27:01.441109  782266 command_runner.go:130] > [crio.api]
	I0920 19:27:01.441121  782266 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0920 19:27:01.441130  782266 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0920 19:27:01.441141  782266 command_runner.go:130] > # IP address on which the stream server will listen.
	I0920 19:27:01.441150  782266 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0920 19:27:01.441160  782266 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0920 19:27:01.441168  782266 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0920 19:27:01.441177  782266 command_runner.go:130] > # stream_port = "0"
	I0920 19:27:01.441186  782266 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0920 19:27:01.441196  782266 command_runner.go:130] > # stream_enable_tls = false
	I0920 19:27:01.441208  782266 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0920 19:27:01.441217  782266 command_runner.go:130] > # stream_idle_timeout = ""
	I0920 19:27:01.441229  782266 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0920 19:27:01.441241  782266 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0920 19:27:01.441247  782266 command_runner.go:130] > # minutes.
	I0920 19:27:01.441251  782266 command_runner.go:130] > # stream_tls_cert = ""
	I0920 19:27:01.441262  782266 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0920 19:27:01.441273  782266 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0920 19:27:01.441280  782266 command_runner.go:130] > # stream_tls_key = ""
	I0920 19:27:01.441293  782266 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0920 19:27:01.441305  782266 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0920 19:27:01.441326  782266 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0920 19:27:01.441333  782266 command_runner.go:130] > # stream_tls_ca = ""
	I0920 19:27:01.441343  782266 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0920 19:27:01.441353  782266 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0920 19:27:01.441367  782266 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0920 19:27:01.441377  782266 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0920 19:27:01.441387  782266 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0920 19:27:01.441398  782266 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0920 19:27:01.441407  782266 command_runner.go:130] > [crio.runtime]
	I0920 19:27:01.441417  782266 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0920 19:27:01.441426  782266 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0920 19:27:01.441435  782266 command_runner.go:130] > # "nofile=1024:2048"
	I0920 19:27:01.441447  782266 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0920 19:27:01.441458  782266 command_runner.go:130] > # default_ulimits = [
	I0920 19:27:01.441467  782266 command_runner.go:130] > # ]
	I0920 19:27:01.441479  782266 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0920 19:27:01.441487  782266 command_runner.go:130] > # no_pivot = false
	I0920 19:27:01.441499  782266 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0920 19:27:01.441508  782266 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0920 19:27:01.441519  782266 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0920 19:27:01.441531  782266 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0920 19:27:01.441542  782266 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0920 19:27:01.441556  782266 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0920 19:27:01.441566  782266 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0920 19:27:01.441576  782266 command_runner.go:130] > # Cgroup setting for conmon
	I0920 19:27:01.441587  782266 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0920 19:27:01.441596  782266 command_runner.go:130] > conmon_cgroup = "pod"
	I0920 19:27:01.441609  782266 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0920 19:27:01.441620  782266 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0920 19:27:01.441633  782266 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0920 19:27:01.441641  782266 command_runner.go:130] > conmon_env = [
	I0920 19:27:01.441653  782266 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0920 19:27:01.441660  782266 command_runner.go:130] > ]
	I0920 19:27:01.441668  782266 command_runner.go:130] > # Additional environment variables to set for all the
	I0920 19:27:01.441675  782266 command_runner.go:130] > # containers. These are overridden if set in the
	I0920 19:27:01.441682  782266 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0920 19:27:01.441690  782266 command_runner.go:130] > # default_env = [
	I0920 19:27:01.441700  782266 command_runner.go:130] > # ]
	I0920 19:27:01.441711  782266 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0920 19:27:01.441725  782266 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0920 19:27:01.441734  782266 command_runner.go:130] > # selinux = false
	I0920 19:27:01.441746  782266 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0920 19:27:01.441756  782266 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0920 19:27:01.441765  782266 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0920 19:27:01.441775  782266 command_runner.go:130] > # seccomp_profile = ""
	I0920 19:27:01.441788  782266 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0920 19:27:01.441801  782266 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0920 19:27:01.441813  782266 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0920 19:27:01.441823  782266 command_runner.go:130] > # which might increase security.
	I0920 19:27:01.441833  782266 command_runner.go:130] > # This option is currently deprecated,
	I0920 19:27:01.441845  782266 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0920 19:27:01.441855  782266 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0920 19:27:01.441868  782266 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0920 19:27:01.441883  782266 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0920 19:27:01.441899  782266 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0920 19:27:01.441912  782266 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0920 19:27:01.441922  782266 command_runner.go:130] > # This option supports live configuration reload.
	I0920 19:27:01.441930  782266 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0920 19:27:01.441938  782266 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0920 19:27:01.441949  782266 command_runner.go:130] > # the cgroup blockio controller.
	I0920 19:27:01.441959  782266 command_runner.go:130] > # blockio_config_file = ""
	I0920 19:27:01.441969  782266 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0920 19:27:01.441978  782266 command_runner.go:130] > # blockio parameters.
	I0920 19:27:01.441987  782266 command_runner.go:130] > # blockio_reload = false
	I0920 19:27:01.441999  782266 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0920 19:27:01.442008  782266 command_runner.go:130] > # irqbalance daemon.
	I0920 19:27:01.442015  782266 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0920 19:27:01.442024  782266 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0920 19:27:01.442037  782266 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0920 19:27:01.442050  782266 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0920 19:27:01.442062  782266 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0920 19:27:01.442074  782266 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0920 19:27:01.442085  782266 command_runner.go:130] > # This option supports live configuration reload.
	I0920 19:27:01.442094  782266 command_runner.go:130] > # rdt_config_file = ""
	I0920 19:27:01.442102  782266 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0920 19:27:01.442107  782266 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0920 19:27:01.442168  782266 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0920 19:27:01.442183  782266 command_runner.go:130] > # separate_pull_cgroup = ""
	I0920 19:27:01.442189  782266 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0920 19:27:01.442205  782266 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0920 19:27:01.442215  782266 command_runner.go:130] > # will be added.
	I0920 19:27:01.442224  782266 command_runner.go:130] > # default_capabilities = [
	I0920 19:27:01.442233  782266 command_runner.go:130] > # 	"CHOWN",
	I0920 19:27:01.442242  782266 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0920 19:27:01.442250  782266 command_runner.go:130] > # 	"FSETID",
	I0920 19:27:01.442256  782266 command_runner.go:130] > # 	"FOWNER",
	I0920 19:27:01.442264  782266 command_runner.go:130] > # 	"SETGID",
	I0920 19:27:01.442269  782266 command_runner.go:130] > # 	"SETUID",
	I0920 19:27:01.442274  782266 command_runner.go:130] > # 	"SETPCAP",
	I0920 19:27:01.442279  782266 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0920 19:27:01.442286  782266 command_runner.go:130] > # 	"KILL",
	I0920 19:27:01.442294  782266 command_runner.go:130] > # ]
	I0920 19:27:01.442309  782266 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0920 19:27:01.442324  782266 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0920 19:27:01.442344  782266 command_runner.go:130] > # add_inheritable_capabilities = false
	I0920 19:27:01.442355  782266 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0920 19:27:01.442364  782266 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0920 19:27:01.442373  782266 command_runner.go:130] > default_sysctls = [
	I0920 19:27:01.442382  782266 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0920 19:27:01.442390  782266 command_runner.go:130] > ]
	I0920 19:27:01.442401  782266 command_runner.go:130] > # List of devices on the host that a
	I0920 19:27:01.442413  782266 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0920 19:27:01.442422  782266 command_runner.go:130] > # allowed_devices = [
	I0920 19:27:01.442431  782266 command_runner.go:130] > # 	"/dev/fuse",
	I0920 19:27:01.442437  782266 command_runner.go:130] > # ]
	I0920 19:27:01.442442  782266 command_runner.go:130] > # List of additional devices. specified as
	I0920 19:27:01.442454  782266 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0920 19:27:01.442466  782266 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0920 19:27:01.442477  782266 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0920 19:27:01.442487  782266 command_runner.go:130] > # additional_devices = [
	I0920 19:27:01.442495  782266 command_runner.go:130] > # ]
	I0920 19:27:01.442505  782266 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0920 19:27:01.442520  782266 command_runner.go:130] > # cdi_spec_dirs = [
	I0920 19:27:01.442527  782266 command_runner.go:130] > # 	"/etc/cdi",
	I0920 19:27:01.442531  782266 command_runner.go:130] > # 	"/var/run/cdi",
	I0920 19:27:01.442538  782266 command_runner.go:130] > # ]
	I0920 19:27:01.442548  782266 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0920 19:27:01.442561  782266 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0920 19:27:01.442570  782266 command_runner.go:130] > # Defaults to false.
	I0920 19:27:01.442581  782266 command_runner.go:130] > # device_ownership_from_security_context = false
	I0920 19:27:01.442593  782266 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0920 19:27:01.442605  782266 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0920 19:27:01.442611  782266 command_runner.go:130] > # hooks_dir = [
	I0920 19:27:01.442616  782266 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0920 19:27:01.442620  782266 command_runner.go:130] > # ]
	I0920 19:27:01.442633  782266 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0920 19:27:01.442648  782266 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0920 19:27:01.442659  782266 command_runner.go:130] > # its default mounts from the following two files:
	I0920 19:27:01.442667  782266 command_runner.go:130] > #
	I0920 19:27:01.442678  782266 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0920 19:27:01.442692  782266 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0920 19:27:01.442700  782266 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0920 19:27:01.442703  782266 command_runner.go:130] > #
	I0920 19:27:01.442713  782266 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0920 19:27:01.442726  782266 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0920 19:27:01.442739  782266 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0920 19:27:01.442753  782266 command_runner.go:130] > #      only add mounts it finds in this file.
	I0920 19:27:01.442761  782266 command_runner.go:130] > #
	I0920 19:27:01.442768  782266 command_runner.go:130] > # default_mounts_file = ""
	I0920 19:27:01.442778  782266 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0920 19:27:01.442786  782266 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0920 19:27:01.442792  782266 command_runner.go:130] > pids_limit = 1024
	I0920 19:27:01.442805  782266 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0920 19:27:01.442817  782266 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0920 19:27:01.442830  782266 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0920 19:27:01.442870  782266 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0920 19:27:01.442880  782266 command_runner.go:130] > # log_size_max = -1
	I0920 19:27:01.442891  782266 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0920 19:27:01.442900  782266 command_runner.go:130] > # log_to_journald = false
	I0920 19:27:01.442912  782266 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0920 19:27:01.442919  782266 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0920 19:27:01.442925  782266 command_runner.go:130] > # Path to directory for container attach sockets.
	I0920 19:27:01.442934  782266 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0920 19:27:01.442945  782266 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0920 19:27:01.442955  782266 command_runner.go:130] > # bind_mount_prefix = ""
	I0920 19:27:01.442966  782266 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0920 19:27:01.442980  782266 command_runner.go:130] > # read_only = false
	I0920 19:27:01.442992  782266 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0920 19:27:01.443003  782266 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0920 19:27:01.443009  782266 command_runner.go:130] > # live configuration reload.
	I0920 19:27:01.443015  782266 command_runner.go:130] > # log_level = "info"
	I0920 19:27:01.443027  782266 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0920 19:27:01.443039  782266 command_runner.go:130] > # This option supports live configuration reload.
	I0920 19:27:01.443048  782266 command_runner.go:130] > # log_filter = ""
	I0920 19:27:01.443057  782266 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0920 19:27:01.443071  782266 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0920 19:27:01.443080  782266 command_runner.go:130] > # separated by comma.
	I0920 19:27:01.443091  782266 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 19:27:01.443097  782266 command_runner.go:130] > # uid_mappings = ""
	I0920 19:27:01.443106  782266 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0920 19:27:01.443119  782266 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0920 19:27:01.443128  782266 command_runner.go:130] > # separated by comma.
	I0920 19:27:01.443144  782266 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 19:27:01.443156  782266 command_runner.go:130] > # gid_mappings = ""
	I0920 19:27:01.443169  782266 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0920 19:27:01.443178  782266 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0920 19:27:01.443189  782266 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0920 19:27:01.443204  782266 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 19:27:01.443229  782266 command_runner.go:130] > # minimum_mappable_uid = -1
	I0920 19:27:01.443242  782266 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0920 19:27:01.443254  782266 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0920 19:27:01.443262  782266 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0920 19:27:01.443274  782266 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 19:27:01.443285  782266 command_runner.go:130] > # minimum_mappable_gid = -1
	I0920 19:27:01.443297  782266 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0920 19:27:01.443309  782266 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0920 19:27:01.443320  782266 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0920 19:27:01.443330  782266 command_runner.go:130] > # ctr_stop_timeout = 30
	I0920 19:27:01.443341  782266 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0920 19:27:01.443349  782266 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0920 19:27:01.443359  782266 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0920 19:27:01.443370  782266 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0920 19:27:01.443379  782266 command_runner.go:130] > drop_infra_ctr = false
	I0920 19:27:01.443390  782266 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0920 19:27:01.443401  782266 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0920 19:27:01.443414  782266 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0920 19:27:01.443423  782266 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0920 19:27:01.443433  782266 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0920 19:27:01.443444  782266 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0920 19:27:01.443455  782266 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0920 19:27:01.443466  782266 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0920 19:27:01.443475  782266 command_runner.go:130] > # shared_cpuset = ""
	I0920 19:27:01.443485  782266 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0920 19:27:01.443496  782266 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0920 19:27:01.443505  782266 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0920 19:27:01.443516  782266 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0920 19:27:01.443522  782266 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0920 19:27:01.443530  782266 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0920 19:27:01.443546  782266 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0920 19:27:01.443555  782266 command_runner.go:130] > # enable_criu_support = false
	I0920 19:27:01.443566  782266 command_runner.go:130] > # Enable/disable the generation of the container,
	I0920 19:27:01.443579  782266 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0920 19:27:01.443589  782266 command_runner.go:130] > # enable_pod_events = false
	I0920 19:27:01.443600  782266 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0920 19:27:01.443608  782266 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0920 19:27:01.443616  782266 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0920 19:27:01.443626  782266 command_runner.go:130] > # default_runtime = "runc"
	I0920 19:27:01.443637  782266 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0920 19:27:01.443652  782266 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0920 19:27:01.443669  782266 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0920 19:27:01.443680  782266 command_runner.go:130] > # creation as a file is not desired either.
	I0920 19:27:01.443691  782266 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0920 19:27:01.443700  782266 command_runner.go:130] > # the hostname is being managed dynamically.
	I0920 19:27:01.443711  782266 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0920 19:27:01.443720  782266 command_runner.go:130] > # ]
	I0920 19:27:01.443730  782266 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0920 19:27:01.443742  782266 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0920 19:27:01.443755  782266 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0920 19:27:01.443765  782266 command_runner.go:130] > # Each entry in the table should follow the format:
	I0920 19:27:01.443771  782266 command_runner.go:130] > #
	I0920 19:27:01.443776  782266 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0920 19:27:01.443785  782266 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0920 19:27:01.443816  782266 command_runner.go:130] > # runtime_type = "oci"
	I0920 19:27:01.443826  782266 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0920 19:27:01.443838  782266 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0920 19:27:01.443848  782266 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0920 19:27:01.443856  782266 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0920 19:27:01.443861  782266 command_runner.go:130] > # monitor_env = []
	I0920 19:27:01.443868  782266 command_runner.go:130] > # privileged_without_host_devices = false
	I0920 19:27:01.443877  782266 command_runner.go:130] > # allowed_annotations = []
	I0920 19:27:01.443889  782266 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0920 19:27:01.443897  782266 command_runner.go:130] > # Where:
	I0920 19:27:01.443906  782266 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0920 19:27:01.443918  782266 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0920 19:27:01.443932  782266 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0920 19:27:01.443943  782266 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0920 19:27:01.443953  782266 command_runner.go:130] > #   in $PATH.
	I0920 19:27:01.443968  782266 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0920 19:27:01.443979  782266 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0920 19:27:01.443992  782266 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0920 19:27:01.444001  782266 command_runner.go:130] > #   state.
	I0920 19:27:01.444011  782266 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0920 19:27:01.444023  782266 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0920 19:27:01.444031  782266 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0920 19:27:01.444042  782266 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0920 19:27:01.444055  782266 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0920 19:27:01.444069  782266 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0920 19:27:01.444078  782266 command_runner.go:130] > #   The currently recognized values are:
	I0920 19:27:01.444091  782266 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0920 19:27:01.444105  782266 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0920 19:27:01.444114  782266 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0920 19:27:01.444121  782266 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0920 19:27:01.444134  782266 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0920 19:27:01.444147  782266 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0920 19:27:01.444159  782266 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0920 19:27:01.444172  782266 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0920 19:27:01.444184  782266 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0920 19:27:01.444195  782266 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0920 19:27:01.444202  782266 command_runner.go:130] > #   deprecated option "conmon".
	I0920 19:27:01.444212  782266 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0920 19:27:01.444223  782266 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0920 19:27:01.444239  782266 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0920 19:27:01.444250  782266 command_runner.go:130] > #   should be moved to the container's cgroup
	I0920 19:27:01.444263  782266 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0920 19:27:01.444273  782266 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0920 19:27:01.444283  782266 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0920 19:27:01.444292  782266 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0920 19:27:01.444301  782266 command_runner.go:130] > #
	I0920 19:27:01.444312  782266 command_runner.go:130] > # Using the seccomp notifier feature:
	I0920 19:27:01.444322  782266 command_runner.go:130] > #
	I0920 19:27:01.444334  782266 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0920 19:27:01.444346  782266 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0920 19:27:01.444354  782266 command_runner.go:130] > #
	I0920 19:27:01.444363  782266 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0920 19:27:01.444372  782266 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0920 19:27:01.444377  782266 command_runner.go:130] > #
	I0920 19:27:01.444388  782266 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0920 19:27:01.444397  782266 command_runner.go:130] > # feature.
	I0920 19:27:01.444404  782266 command_runner.go:130] > #
	I0920 19:27:01.444414  782266 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0920 19:27:01.444426  782266 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0920 19:27:01.444438  782266 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0920 19:27:01.444450  782266 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0920 19:27:01.444458  782266 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0920 19:27:01.444465  782266 command_runner.go:130] > #
	I0920 19:27:01.444478  782266 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0920 19:27:01.444490  782266 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0920 19:27:01.444497  782266 command_runner.go:130] > #
	I0920 19:27:01.444507  782266 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0920 19:27:01.444519  782266 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0920 19:27:01.444526  782266 command_runner.go:130] > #
	I0920 19:27:01.444536  782266 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0920 19:27:01.444544  782266 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0920 19:27:01.444552  782266 command_runner.go:130] > # limitation.
	I0920 19:27:01.444563  782266 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0920 19:27:01.444574  782266 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0920 19:27:01.444582  782266 command_runner.go:130] > runtime_type = "oci"
	I0920 19:27:01.444592  782266 command_runner.go:130] > runtime_root = "/run/runc"
	I0920 19:27:01.444601  782266 command_runner.go:130] > runtime_config_path = ""
	I0920 19:27:01.444609  782266 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0920 19:27:01.444622  782266 command_runner.go:130] > monitor_cgroup = "pod"
	I0920 19:27:01.444628  782266 command_runner.go:130] > monitor_exec_cgroup = ""
	I0920 19:27:01.444634  782266 command_runner.go:130] > monitor_env = [
	I0920 19:27:01.444646  782266 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0920 19:27:01.444654  782266 command_runner.go:130] > ]
	I0920 19:27:01.444664  782266 command_runner.go:130] > privileged_without_host_devices = false
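The [crio.runtime.runtimes.runc] table above is the handler this cluster resolves to, since default_runtime is commented out and falls back to "runc". A sketch of re-checking just that table on the node, reusing the same crio config command that produced this dump (assuming the multinode-756894 profile):

	# print only the runc runtime-handler section of the effective CRI-O config
	minikube -p multinode-756894 ssh -- "sudo crio config | grep -A 12 'crio.runtime.runtimes.runc'"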
	I0920 19:27:01.444676  782266 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0920 19:27:01.444687  782266 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0920 19:27:01.444698  782266 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0920 19:27:01.444709  782266 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0920 19:27:01.444726  782266 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0920 19:27:01.444739  782266 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0920 19:27:01.444755  782266 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0920 19:27:01.444771  782266 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0920 19:27:01.444782  782266 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0920 19:27:01.444794  782266 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0920 19:27:01.444800  782266 command_runner.go:130] > # Example:
	I0920 19:27:01.444807  782266 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0920 19:27:01.444818  782266 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0920 19:27:01.444830  782266 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0920 19:27:01.444844  782266 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0920 19:27:01.444852  782266 command_runner.go:130] > # cpuset = 0
	I0920 19:27:01.444858  782266 command_runner.go:130] > # cpushares = "0-1"
	I0920 19:27:01.444866  782266 command_runner.go:130] > # Where:
	I0920 19:27:01.444875  782266 command_runner.go:130] > # The workload name is workload-type.
	I0920 19:27:01.444884  782266 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0920 19:27:01.444891  782266 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0920 19:27:01.444897  782266 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0920 19:27:01.444906  782266 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0920 19:27:01.444917  782266 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0920 19:27:01.444929  782266 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0920 19:27:01.444942  782266 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0920 19:27:01.444952  782266 command_runner.go:130] > # Default value is set to true
	I0920 19:27:01.444962  782266 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0920 19:27:01.444973  782266 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0920 19:27:01.444983  782266 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0920 19:27:01.444989  782266 command_runner.go:130] > # Default value is set to 'false'
	I0920 19:27:01.444993  782266 command_runner.go:130] > # disable_hostport_mapping = false
	I0920 19:27:01.445000  782266 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0920 19:27:01.445002  782266 command_runner.go:130] > #
	I0920 19:27:01.445008  782266 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0920 19:27:01.445013  782266 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0920 19:27:01.445020  782266 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0920 19:27:01.445026  782266 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0920 19:27:01.445033  782266 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0920 19:27:01.445037  782266 command_runner.go:130] > [crio.image]
	I0920 19:27:01.445042  782266 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0920 19:27:01.445046  782266 command_runner.go:130] > # default_transport = "docker://"
	I0920 19:27:01.445051  782266 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0920 19:27:01.445060  782266 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0920 19:27:01.445068  782266 command_runner.go:130] > # global_auth_file = ""
	I0920 19:27:01.445077  782266 command_runner.go:130] > # The image used to instantiate infra containers.
	I0920 19:27:01.445085  782266 command_runner.go:130] > # This option supports live configuration reload.
	I0920 19:27:01.445092  782266 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0920 19:27:01.445102  782266 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0920 19:27:01.445110  782266 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0920 19:27:01.445118  782266 command_runner.go:130] > # This option supports live configuration reload.
	I0920 19:27:01.445123  782266 command_runner.go:130] > # pause_image_auth_file = ""
	I0920 19:27:01.445131  782266 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0920 19:27:01.445137  782266 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0920 19:27:01.445143  782266 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0920 19:27:01.445148  782266 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0920 19:27:01.445152  782266 command_runner.go:130] > # pause_command = "/pause"
	I0920 19:27:01.445158  782266 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0920 19:27:01.445163  782266 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0920 19:27:01.445168  782266 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0920 19:27:01.445177  782266 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0920 19:27:01.445184  782266 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0920 19:27:01.445190  782266 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0920 19:27:01.445193  782266 command_runner.go:130] > # pinned_images = [
	I0920 19:27:01.445196  782266 command_runner.go:130] > # ]
	I0920 19:27:01.445202  782266 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0920 19:27:01.445211  782266 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0920 19:27:01.445217  782266 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0920 19:27:01.445224  782266 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0920 19:27:01.445230  782266 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0920 19:27:01.445236  782266 command_runner.go:130] > # signature_policy = ""
	I0920 19:27:01.445242  782266 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0920 19:27:01.445250  782266 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0920 19:27:01.445256  782266 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0920 19:27:01.445267  782266 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I0920 19:27:01.445274  782266 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0920 19:27:01.445279  782266 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0920 19:27:01.445291  782266 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0920 19:27:01.445303  782266 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0920 19:27:01.445310  782266 command_runner.go:130] > # changing them here.
	I0920 19:27:01.445314  782266 command_runner.go:130] > # insecure_registries = [
	I0920 19:27:01.445319  782266 command_runner.go:130] > # ]
	I0920 19:27:01.445325  782266 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0920 19:27:01.445333  782266 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0920 19:27:01.445337  782266 command_runner.go:130] > # image_volumes = "mkdir"
	I0920 19:27:01.445344  782266 command_runner.go:130] > # Temporary directory to use for storing big files
	I0920 19:27:01.445348  782266 command_runner.go:130] > # big_files_temporary_dir = ""
	I0920 19:27:01.445356  782266 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0920 19:27:01.445362  782266 command_runner.go:130] > # CNI plugins.
	I0920 19:27:01.445366  782266 command_runner.go:130] > [crio.network]
	I0920 19:27:01.445374  782266 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0920 19:27:01.445381  782266 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0920 19:27:01.445385  782266 command_runner.go:130] > # cni_default_network = ""
	I0920 19:27:01.445396  782266 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0920 19:27:01.445402  782266 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0920 19:27:01.445408  782266 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0920 19:27:01.445414  782266 command_runner.go:130] > # plugin_dirs = [
	I0920 19:27:01.445418  782266 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0920 19:27:01.445424  782266 command_runner.go:130] > # ]
	I0920 19:27:01.445430  782266 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0920 19:27:01.445435  782266 command_runner.go:130] > [crio.metrics]
	I0920 19:27:01.445450  782266 command_runner.go:130] > # Globally enable or disable metrics support.
	I0920 19:27:01.445456  782266 command_runner.go:130] > enable_metrics = true
	I0920 19:27:01.445461  782266 command_runner.go:130] > # Specify enabled metrics collectors.
	I0920 19:27:01.445470  782266 command_runner.go:130] > # Per default all metrics are enabled.
	I0920 19:27:01.445476  782266 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0920 19:27:01.445484  782266 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0920 19:27:01.445492  782266 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0920 19:27:01.445496  782266 command_runner.go:130] > # metrics_collectors = [
	I0920 19:27:01.445502  782266 command_runner.go:130] > # 	"operations",
	I0920 19:27:01.445507  782266 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0920 19:27:01.445513  782266 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0920 19:27:01.445517  782266 command_runner.go:130] > # 	"operations_errors",
	I0920 19:27:01.445523  782266 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0920 19:27:01.445527  782266 command_runner.go:130] > # 	"image_pulls_by_name",
	I0920 19:27:01.445533  782266 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0920 19:27:01.445541  782266 command_runner.go:130] > # 	"image_pulls_failures",
	I0920 19:27:01.445547  782266 command_runner.go:130] > # 	"image_pulls_successes",
	I0920 19:27:01.445552  782266 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0920 19:27:01.445557  782266 command_runner.go:130] > # 	"image_layer_reuse",
	I0920 19:27:01.445562  782266 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0920 19:27:01.445568  782266 command_runner.go:130] > # 	"containers_oom_total",
	I0920 19:27:01.445572  782266 command_runner.go:130] > # 	"containers_oom",
	I0920 19:27:01.445578  782266 command_runner.go:130] > # 	"processes_defunct",
	I0920 19:27:01.445582  782266 command_runner.go:130] > # 	"operations_total",
	I0920 19:27:01.445590  782266 command_runner.go:130] > # 	"operations_latency_seconds",
	I0920 19:27:01.445596  782266 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0920 19:27:01.445602  782266 command_runner.go:130] > # 	"operations_errors_total",
	I0920 19:27:01.445607  782266 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0920 19:27:01.445611  782266 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0920 19:27:01.445616  782266 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0920 19:27:01.445620  782266 command_runner.go:130] > # 	"image_pulls_success_total",
	I0920 19:27:01.445626  782266 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0920 19:27:01.445631  782266 command_runner.go:130] > # 	"containers_oom_count_total",
	I0920 19:27:01.445637  782266 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0920 19:27:01.445642  782266 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0920 19:27:01.445647  782266 command_runner.go:130] > # ]
	I0920 19:27:01.445652  782266 command_runner.go:130] > # The port on which the metrics server will listen.
	I0920 19:27:01.445658  782266 command_runner.go:130] > # metrics_port = 9090
	I0920 19:27:01.445663  782266 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0920 19:27:01.445671  782266 command_runner.go:130] > # metrics_socket = ""
	I0920 19:27:01.445681  782266 command_runner.go:130] > # The certificate for the secure metrics server.
	I0920 19:27:01.445693  782266 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0920 19:27:01.445702  782266 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0920 19:27:01.445709  782266 command_runner.go:130] > # certificate on any modification event.
	I0920 19:27:01.445713  782266 command_runner.go:130] > # metrics_cert = ""
	I0920 19:27:01.445720  782266 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0920 19:27:01.445725  782266 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0920 19:27:01.445731  782266 command_runner.go:130] > # metrics_key = ""
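Since enable_metrics is true above, CRI-O exposes Prometheus metrics, by default on metrics_port 9090. A minimal Go sketch of scraping them from the node, assuming the default port and a plain-HTTP listener (i.e. metrics_cert/metrics_key left unset, as in the dumped config):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Assumes CRI-O's metrics server is reachable on localhost:9090 without TLS;
	// adjust the scheme and port if metrics_cert/metrics_key or metrics_port are set.
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d bytes of metrics, status %s\n", len(body), resp.Status)
}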
	I0920 19:27:01.445737  782266 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0920 19:27:01.445743  782266 command_runner.go:130] > [crio.tracing]
	I0920 19:27:01.445749  782266 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0920 19:27:01.445755  782266 command_runner.go:130] > # enable_tracing = false
	I0920 19:27:01.445760  782266 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0920 19:27:01.445767  782266 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0920 19:27:01.445773  782266 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0920 19:27:01.445780  782266 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0920 19:27:01.445784  782266 command_runner.go:130] > # CRI-O NRI configuration.
	I0920 19:27:01.445789  782266 command_runner.go:130] > [crio.nri]
	I0920 19:27:01.445795  782266 command_runner.go:130] > # Globally enable or disable NRI.
	I0920 19:27:01.445801  782266 command_runner.go:130] > # enable_nri = false
	I0920 19:27:01.445808  782266 command_runner.go:130] > # NRI socket to listen on.
	I0920 19:27:01.445814  782266 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0920 19:27:01.445819  782266 command_runner.go:130] > # NRI plugin directory to use.
	I0920 19:27:01.445825  782266 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0920 19:27:01.445830  782266 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0920 19:27:01.445840  782266 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0920 19:27:01.445847  782266 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0920 19:27:01.445851  782266 command_runner.go:130] > # nri_disable_connections = false
	I0920 19:27:01.445858  782266 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0920 19:27:01.445862  782266 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0920 19:27:01.445867  782266 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0920 19:27:01.445874  782266 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0920 19:27:01.445881  782266 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0920 19:27:01.445887  782266 command_runner.go:130] > [crio.stats]
	I0920 19:27:01.445893  782266 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0920 19:27:01.445900  782266 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0920 19:27:01.445905  782266 command_runner.go:130] > # stats_collection_period = 0
	I0920 19:27:01.445987  782266 cni.go:84] Creating CNI manager for ""
	I0920 19:27:01.445999  782266 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0920 19:27:01.446009  782266 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:27:01.446031  782266 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.168 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-756894 NodeName:multinode-756894 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.168"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.168 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:27:01.446157  782266 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.168
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-756894"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.168
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.168"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
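The YAML above is rendered from the kubeadm options struct logged at kubeadm.go:181. As a rough, hypothetical illustration only (minikube's real templates are much larger), the same kind of rendering can be done with Go's text/template; the Opts struct and field names here are stand-ins, not minikube's actual types:

package main

import (
	"os"
	"text/template"
)

// Opts is a hypothetical, trimmed-down stand-in for minikube's kubeadm options.
type Opts struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
	PodSubnet        string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	// Renders a small fragment of a kubeadm config from the options struct.
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, Opts{
		AdvertiseAddress: "192.168.39.168",
		APIServerPort:    8443,
		NodeName:         "multinode-756894",
		PodSubnet:        "10.244.0.0/16",
	})
}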
	
	I0920 19:27:01.446225  782266 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:27:01.456797  782266 command_runner.go:130] > kubeadm
	I0920 19:27:01.456819  782266 command_runner.go:130] > kubectl
	I0920 19:27:01.456825  782266 command_runner.go:130] > kubelet
	I0920 19:27:01.456878  782266 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:27:01.456937  782266 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:27:01.466503  782266 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0920 19:27:01.483431  782266 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:27:01.499900  782266 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0920 19:27:01.516337  782266 ssh_runner.go:195] Run: grep 192.168.39.168	control-plane.minikube.internal$ /etc/hosts
	I0920 19:27:01.520227  782266 command_runner.go:130] > 192.168.39.168	control-plane.minikube.internal
	I0920 19:27:01.520302  782266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:27:01.659368  782266 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:27:01.673938  782266 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/multinode-756894 for IP: 192.168.39.168
	I0920 19:27:01.673973  782266 certs.go:194] generating shared ca certs ...
	I0920 19:27:01.674002  782266 certs.go:226] acquiring lock for ca certs: {Name:mkf559981e1ff96dd3b092845a7637f34a653668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:27:01.674214  782266 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key
	I0920 19:27:01.674264  782266 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key
	I0920 19:27:01.674276  782266 certs.go:256] generating profile certs ...
	I0920 19:27:01.674387  782266 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/multinode-756894/client.key
	I0920 19:27:01.674533  782266 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/multinode-756894/apiserver.key.f88761e5
	I0920 19:27:01.674576  782266 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/multinode-756894/proxy-client.key
	I0920 19:27:01.674588  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 19:27:01.674610  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 19:27:01.674623  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 19:27:01.674638  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 19:27:01.674650  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/multinode-756894/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 19:27:01.674664  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/multinode-756894/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 19:27:01.674674  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/multinode-756894/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 19:27:01.674687  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/multinode-756894/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 19:27:01.674741  782266 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem (1338 bytes)
	W0920 19:27:01.674771  782266 certs.go:480] ignoring /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497_empty.pem, impossibly tiny 0 bytes
	I0920 19:27:01.674781  782266 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:27:01.674803  782266 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:27:01.674825  782266 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:27:01.674864  782266 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem (1679 bytes)
	I0920 19:27:01.674911  782266 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 19:27:01.674939  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /usr/share/ca-certificates/7484972.pem
	I0920 19:27:01.674952  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:27:01.674964  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem -> /usr/share/ca-certificates/748497.pem
	I0920 19:27:01.675602  782266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:27:01.700159  782266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:27:01.724109  782266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:27:01.747820  782266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:27:01.771488  782266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/multinode-756894/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 19:27:01.796401  782266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/multinode-756894/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 19:27:01.820944  782266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/multinode-756894/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:27:01.844404  782266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/multinode-756894/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 19:27:01.868179  782266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /usr/share/ca-certificates/7484972.pem (1708 bytes)
	I0920 19:27:01.891319  782266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:27:01.915784  782266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem --> /usr/share/ca-certificates/748497.pem (1338 bytes)
	I0920 19:27:01.941340  782266 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:27:01.960152  782266 ssh_runner.go:195] Run: openssl version
	I0920 19:27:01.966223  782266 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0920 19:27:01.966294  782266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7484972.pem && ln -fs /usr/share/ca-certificates/7484972.pem /etc/ssl/certs/7484972.pem"
	I0920 19:27:01.979574  782266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7484972.pem
	I0920 19:27:01.984071  782266 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 20 18:33 /usr/share/ca-certificates/7484972.pem
	I0920 19:27:01.984242  782266 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 18:33 /usr/share/ca-certificates/7484972.pem
	I0920 19:27:01.984291  782266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7484972.pem
	I0920 19:27:01.989937  782266 command_runner.go:130] > 3ec20f2e
	I0920 19:27:01.990020  782266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7484972.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:27:02.000632  782266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:27:02.036176  782266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:27:02.040768  782266 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 20 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:27:02.040811  782266 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:27:02.040864  782266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:27:02.046715  782266 command_runner.go:130] > b5213941
	I0920 19:27:02.046797  782266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:27:02.056796  782266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/748497.pem && ln -fs /usr/share/ca-certificates/748497.pem /etc/ssl/certs/748497.pem"
	I0920 19:27:02.068180  782266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/748497.pem
	I0920 19:27:02.072678  782266 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 20 18:33 /usr/share/ca-certificates/748497.pem
	I0920 19:27:02.072716  782266 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 18:33 /usr/share/ca-certificates/748497.pem
	I0920 19:27:02.072761  782266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/748497.pem
	I0920 19:27:02.078283  782266 command_runner.go:130] > 51391683
	I0920 19:27:02.078442  782266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/748497.pem /etc/ssl/certs/51391683.0"
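The three blocks above follow the same pattern: copy the PEM into /usr/share/ca-certificates, ask openssl for its subject hash, and symlink /etc/ssl/certs/<hash>.0 at the file so OpenSSL-based clients can find it. A rough Go sketch of that step, shelling out to openssl exactly as the logged commands do (the path is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCACert creates /etc/ssl/certs/<subject-hash>.0 pointing at certPath,
// mirroring the "openssl x509 -hash -noout" + "ln -fs" commands in the log.
func installCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "51391683"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // "ln -fs" overwrites an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/748497.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}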
	I0920 19:27:02.090190  782266 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:27:02.095529  782266 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:27:02.095567  782266 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0920 19:27:02.095577  782266 command_runner.go:130] > Device: 253,1	Inode: 3148840     Links: 1
	I0920 19:27:02.095596  782266 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0920 19:27:02.095609  782266 command_runner.go:130] > Access: 2024-09-20 19:20:12.559886041 +0000
	I0920 19:27:02.095621  782266 command_runner.go:130] > Modify: 2024-09-20 19:20:12.559886041 +0000
	I0920 19:27:02.095633  782266 command_runner.go:130] > Change: 2024-09-20 19:20:12.559886041 +0000
	I0920 19:27:02.095644  782266 command_runner.go:130] >  Birth: 2024-09-20 19:20:12.559886041 +0000
	I0920 19:27:02.095708  782266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:27:02.101387  782266 command_runner.go:130] > Certificate will not expire
	I0920 19:27:02.101745  782266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:27:02.107407  782266 command_runner.go:130] > Certificate will not expire
	I0920 19:27:02.107708  782266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:27:02.113969  782266 command_runner.go:130] > Certificate will not expire
	I0920 19:27:02.114138  782266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:27:02.119743  782266 command_runner.go:130] > Certificate will not expire
	I0920 19:27:02.119828  782266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:27:02.125383  782266 command_runner.go:130] > Certificate will not expire
	I0920 19:27:02.125437  782266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 19:27:02.130814  782266 command_runner.go:130] > Certificate will not expire
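Each "openssl x509 -checkend 86400" call above succeeds ("Certificate will not expire"), meaning none of the control-plane certificates expire within the next 24 hours. The equivalent check can be done in Go without shelling out; a small sketch using crypto/x509 (the certificate path is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question "openssl x509 -checkend <seconds>" answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}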
	I0920 19:27:02.130954  782266 kubeadm.go:392] StartCluster: {Name:multinode-756894 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-756894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.204 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.72 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:f
alse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:27:02.131080  782266 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:27:02.131136  782266 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:27:02.165194  782266 command_runner.go:130] > 72d4e37d4505a966f9c27a077765be9a06e11ac9bc0e052057beff94627d97aa
	I0920 19:27:02.165224  782266 command_runner.go:130] > 28022afd37d6d2f008deeded66b17c0a72113eb2ad5fa6907f1036e18f1d975f
	I0920 19:27:02.165231  782266 command_runner.go:130] > fccbad40f2b3455e5b2be6eb12686d14833c14db21d1480e73f0f2e178f535d6
	I0920 19:27:02.165237  782266 command_runner.go:130] > 12fa8b93a3911dbfe0fc55628c90d22e218afdfbe8f5e7195f783b7c7c8af414
	I0920 19:27:02.165242  782266 command_runner.go:130] > 4461a840382433ce4c8a6f37fc819725d31e2075f1670cb56237965555159f42
	I0920 19:27:02.165247  782266 command_runner.go:130] > fab8a49afdb38089ed0f1190eef6bdf74f69a5b5bca37ddf5156576bed7e64d8
	I0920 19:27:02.165252  782266 command_runner.go:130] > 9e8c8df52527f8e0973846dfbbb41192f0968ba9bf00c00c03d6b7da2cc76c21
	I0920 19:27:02.165258  782266 command_runner.go:130] > 23a0fc48b5b4d9d4c9ae9065d80b0b2d042a7d5e7919702776ea5eec96d6d70b
	I0920 19:27:02.166670  782266 cri.go:89] found id: "72d4e37d4505a966f9c27a077765be9a06e11ac9bc0e052057beff94627d97aa"
	I0920 19:27:02.166689  782266 cri.go:89] found id: "28022afd37d6d2f008deeded66b17c0a72113eb2ad5fa6907f1036e18f1d975f"
	I0920 19:27:02.166694  782266 cri.go:89] found id: "fccbad40f2b3455e5b2be6eb12686d14833c14db21d1480e73f0f2e178f535d6"
	I0920 19:27:02.166698  782266 cri.go:89] found id: "12fa8b93a3911dbfe0fc55628c90d22e218afdfbe8f5e7195f783b7c7c8af414"
	I0920 19:27:02.166703  782266 cri.go:89] found id: "4461a840382433ce4c8a6f37fc819725d31e2075f1670cb56237965555159f42"
	I0920 19:27:02.166715  782266 cri.go:89] found id: "fab8a49afdb38089ed0f1190eef6bdf74f69a5b5bca37ddf5156576bed7e64d8"
	I0920 19:27:02.166719  782266 cri.go:89] found id: "9e8c8df52527f8e0973846dfbbb41192f0968ba9bf00c00c03d6b7da2cc76c21"
	I0920 19:27:02.166723  782266 cri.go:89] found id: "23a0fc48b5b4d9d4c9ae9065d80b0b2d042a7d5e7919702776ea5eec96d6d70b"
	I0920 19:27:02.166727  782266 cri.go:89] found id: ""
	I0920 19:27:02.166783  782266 ssh_runner.go:195] Run: sudo runc list -f json
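Before restarting the cluster, minikube enumerates the existing kube-system containers through the CRI ("crictl ps -a --quiet --label ...") and gets the eight container IDs listed above. A minimal Go sketch of that listing step, shelling out to crictl the same way the logged command does (run locally here rather than over SSH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Lists all CRI containers (running or exited) whose pod is in kube-system,
	// one container ID per line, like the output captured in the log above.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d containers\n", len(ids))
	for _, id := range ids {
		fmt.Println(id)
	}
}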
	
	
	==> CRI-O <==
	Sep 20 19:28:45 multinode-756894 crio[2698]: time="2024-09-20 19:28:45.675035449Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860525675010032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ab6b7a8-311e-40e4-b244-183c0486fc46 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:28:45 multinode-756894 crio[2698]: time="2024-09-20 19:28:45.675535279Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9fa407bf-efeb-4919-b27d-a2bea5f89da5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:28:45 multinode-756894 crio[2698]: time="2024-09-20 19:28:45.675590576Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9fa407bf-efeb-4919-b27d-a2bea5f89da5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:28:45 multinode-756894 crio[2698]: time="2024-09-20 19:28:45.676893229Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1032aae897c6a6e789586471840775b35b66bd755e2ea7221f6c2a7e9d01023f,PodSandboxId:78021bafe3d20c913f62fb213864b4120ec395f0f5f378ad7cbecfa5d6cc413b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726860462507047601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kr8zb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 20cc00ef-3035-488b-8846-4f43d56fa236,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0475dc410c9bd2962c731c4869948eccbd23017afbe2bf565ebbd87fdeb2bf23,PodSandboxId:76d754085d774cda47005a0433633d32fe881e18fec2cb9bd995aef50fa7d786,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726860428957320778,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2r822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e0c1f9-bbfc-4362-89fa-daeaae236602,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e31c03c3121e36f8b680ffeb04be136d13ed0eb30cc4232945f16828840b7fcf,PodSandboxId:4e099d73b22682c0a5a5a1dde1626a039ae8d2d98ffc894094e53b18de416a1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726860428727515994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-k7xq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38699bbe-9579-45e3-abe2-828348e890d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b1326c5d456cb895967959e46bbd4ccd2743f8eebbf4ea2920d93996b086c5,PodSandboxId:38808a33eadabf60b3bf793daee35e8b017adba19989f83c8c108d030de0a94b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726860428654175913,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5tkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c6ded21-2ef2-4fe8-be29-6f2c3fcc7e35,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60955aaf5949a780a0a25c5166eda54b8cfa0b6459fc2943c7962b3d0b2b479c,PodSandboxId:9f438bf46a92e7ffab993bc69d2ce1ab0a773e378252bf0e2ff344aa0a2c1065,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726860428685619092,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a43c8dc-3fd3-4761-876b-2494967a2f7b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9bd3ed7c925e80838dbb28a891c683435a9d040feae22c3e18aa8e7273cc15,PodSandboxId:77bd787f41e6c6feee7f8596005b33a7314f2999fd11550c663b84f758a289d4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726860424851395385,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6682239b7fb722a605bb9fb206b85a2,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5e65f8c20a2386e254f6706107d0cc7f75f834b088e05d4f04adfc74984a60,PodSandboxId:5fd31bbab35fed3fb22d7232ee57a5e50103639d591865ca5cfe6e2c2cccef57,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726860424814308052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 822c8ceecdfe4119d999c5ee754b8ea8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c77a7e1a224a7e290b56fbe2966968e539e4d6246462134099cf5e363b83dffb,PodSandboxId:ac9232fb23f537a32cd6724d4f0aa7417b505d67a73444703a1614fd39c88ec8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726860424831160293,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ca4c90e5672c1e7faf18f515c7ce595,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6c2d47645b3c320308300d42f92d21313904e5f30c2e0843aaa7c588014c301,PodSandboxId:ed88405164d64c4b8218fab6d9abcde896bfb41cb5c836aed409402e9a0f4b6e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726860424763351235,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56fdd834c9abaafc449abcb898099f8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8dcd882059b0fe1be82143b183508ab50d5688d42f4fd8d917bda34237eba96,PodSandboxId:5a505c52e820cb170c44f822bb0f5662b81a1567befe1e0296e6e9c359b4a0cd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726860095699891078,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kr8zb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 20cc00ef-3035-488b-8846-4f43d56fa236,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d4e37d4505a966f9c27a077765be9a06e11ac9bc0e052057beff94627d97aa,PodSandboxId:983db580543713ff6b79c9d54e5d3d700566e8c06206f15b4fd8278675f97229,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726860038958952580,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-k7xq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38699bbe-9579-45e3-abe2-828348e890d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28022afd37d6d2f008deeded66b17c0a72113eb2ad5fa6907f1036e18f1d975f,PodSandboxId:484bc97168c87403e6d624f33c30eae4932c92e413d6fc360ca0cde0661bbbe1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726860038935052890,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 2a43c8dc-3fd3-4761-876b-2494967a2f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fccbad40f2b3455e5b2be6eb12686d14833c14db21d1480e73f0f2e178f535d6,PodSandboxId:212c62c39933f2f28d2f41ea89dd26b2c4b4c9f9ceaa334e43d1d4c8fdabac3c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726860027205042203,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2r822,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 94e0c1f9-bbfc-4362-89fa-daeaae236602,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12fa8b93a3911dbfe0fc55628c90d22e218afdfbe8f5e7195f783b7c7c8af414,PodSandboxId:2409beaba30e5726b8d9b5f1b0e2faac96a8251d15d45802ade6f092deab1ef7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726860026953549127,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5tkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c6ded21-2ef2-4fe8-be29
-6f2c3fcc7e35,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4461a840382433ce4c8a6f37fc819725d31e2075f1670cb56237965555159f42,PodSandboxId:5664e1f96c4c0c218278727eeed301014e9b3a0687a3db76fc8af725e4555183,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726860015825323175,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 822c8ceecdfe4119d999c5ee754b8ea8,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e8c8df52527f8e0973846dfbbb41192f0968ba9bf00c00c03d6b7da2cc76c21,PodSandboxId:04efda436d7240785cb8c0b48a1928dfbc3b03ffd216666b108f27cbfbadedb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726860015795461437,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6682239b7fb722a605bb9fb206b85a2,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fab8a49afdb38089ed0f1190eef6bdf74f69a5b5bca37ddf5156576bed7e64d8,PodSandboxId:96e2d4cf2c1fbe2fc2b1bd48e31218fd3dabf1869233ade215a1dea2d0c9b7c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726860015822873372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56fdd834c9abaafc449abcb898099f8d,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23a0fc48b5b4d9d4c9ae9065d80b0b2d042a7d5e7919702776ea5eec96d6d70b,PodSandboxId:d22dc1cbb45f98df2dd239baaa1834cf1d79630d7011a5acc083024c76958a99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726860015790876301,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ca4c90e5672c1e7faf18f515c7ce595,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9fa407bf-efeb-4919-b27d-a2bea5f89da5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:28:45 multinode-756894 crio[2698]: time="2024-09-20 19:28:45.725860803Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f63e7dc5-0697-46f8-b5f1-63cff8dd775d name=/runtime.v1.RuntimeService/Version
	Sep 20 19:28:45 multinode-756894 crio[2698]: time="2024-09-20 19:28:45.725936612Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f63e7dc5-0697-46f8-b5f1-63cff8dd775d name=/runtime.v1.RuntimeService/Version
	Sep 20 19:28:45 multinode-756894 crio[2698]: time="2024-09-20 19:28:45.727412608Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4180b15d-96f1-4a54-9166-f4ee96cc4418 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:28:45 multinode-756894 crio[2698]: time="2024-09-20 19:28:45.727947161Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860525727920304,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4180b15d-96f1-4a54-9166-f4ee96cc4418 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:28:45 multinode-756894 crio[2698]: time="2024-09-20 19:28:45.728459160Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=75a91215-172c-4680-bd51-422f3f57f4ea name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:28:45 multinode-756894 crio[2698]: time="2024-09-20 19:28:45.728516224Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=75a91215-172c-4680-bd51-422f3f57f4ea name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:28:45 multinode-756894 crio[2698]: time="2024-09-20 19:28:45.728941772Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1032aae897c6a6e789586471840775b35b66bd755e2ea7221f6c2a7e9d01023f,PodSandboxId:78021bafe3d20c913f62fb213864b4120ec395f0f5f378ad7cbecfa5d6cc413b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726860462507047601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kr8zb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 20cc00ef-3035-488b-8846-4f43d56fa236,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0475dc410c9bd2962c731c4869948eccbd23017afbe2bf565ebbd87fdeb2bf23,PodSandboxId:76d754085d774cda47005a0433633d32fe881e18fec2cb9bd995aef50fa7d786,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726860428957320778,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2r822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e0c1f9-bbfc-4362-89fa-daeaae236602,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e31c03c3121e36f8b680ffeb04be136d13ed0eb30cc4232945f16828840b7fcf,PodSandboxId:4e099d73b22682c0a5a5a1dde1626a039ae8d2d98ffc894094e53b18de416a1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726860428727515994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-k7xq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38699bbe-9579-45e3-abe2-828348e890d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b1326c5d456cb895967959e46bbd4ccd2743f8eebbf4ea2920d93996b086c5,PodSandboxId:38808a33eadabf60b3bf793daee35e8b017adba19989f83c8c108d030de0a94b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726860428654175913,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5tkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c6ded21-2ef2-4fe8-be29-6f2c3fcc7e35,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60955aaf5949a780a0a25c5166eda54b8cfa0b6459fc2943c7962b3d0b2b479c,PodSandboxId:9f438bf46a92e7ffab993bc69d2ce1ab0a773e378252bf0e2ff344aa0a2c1065,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726860428685619092,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a43c8dc-3fd3-4761-876b-2494967a2f7b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9bd3ed7c925e80838dbb28a891c683435a9d040feae22c3e18aa8e7273cc15,PodSandboxId:77bd787f41e6c6feee7f8596005b33a7314f2999fd11550c663b84f758a289d4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726860424851395385,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6682239b7fb722a605bb9fb206b85a2,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5e65f8c20a2386e254f6706107d0cc7f75f834b088e05d4f04adfc74984a60,PodSandboxId:5fd31bbab35fed3fb22d7232ee57a5e50103639d591865ca5cfe6e2c2cccef57,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726860424814308052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 822c8ceecdfe4119d999c5ee754b8ea8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c77a7e1a224a7e290b56fbe2966968e539e4d6246462134099cf5e363b83dffb,PodSandboxId:ac9232fb23f537a32cd6724d4f0aa7417b505d67a73444703a1614fd39c88ec8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726860424831160293,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ca4c90e5672c1e7faf18f515c7ce595,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6c2d47645b3c320308300d42f92d21313904e5f30c2e0843aaa7c588014c301,PodSandboxId:ed88405164d64c4b8218fab6d9abcde896bfb41cb5c836aed409402e9a0f4b6e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726860424763351235,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56fdd834c9abaafc449abcb898099f8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8dcd882059b0fe1be82143b183508ab50d5688d42f4fd8d917bda34237eba96,PodSandboxId:5a505c52e820cb170c44f822bb0f5662b81a1567befe1e0296e6e9c359b4a0cd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726860095699891078,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kr8zb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 20cc00ef-3035-488b-8846-4f43d56fa236,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d4e37d4505a966f9c27a077765be9a06e11ac9bc0e052057beff94627d97aa,PodSandboxId:983db580543713ff6b79c9d54e5d3d700566e8c06206f15b4fd8278675f97229,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726860038958952580,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-k7xq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38699bbe-9579-45e3-abe2-828348e890d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28022afd37d6d2f008deeded66b17c0a72113eb2ad5fa6907f1036e18f1d975f,PodSandboxId:484bc97168c87403e6d624f33c30eae4932c92e413d6fc360ca0cde0661bbbe1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726860038935052890,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 2a43c8dc-3fd3-4761-876b-2494967a2f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fccbad40f2b3455e5b2be6eb12686d14833c14db21d1480e73f0f2e178f535d6,PodSandboxId:212c62c39933f2f28d2f41ea89dd26b2c4b4c9f9ceaa334e43d1d4c8fdabac3c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726860027205042203,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2r822,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 94e0c1f9-bbfc-4362-89fa-daeaae236602,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12fa8b93a3911dbfe0fc55628c90d22e218afdfbe8f5e7195f783b7c7c8af414,PodSandboxId:2409beaba30e5726b8d9b5f1b0e2faac96a8251d15d45802ade6f092deab1ef7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726860026953549127,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5tkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c6ded21-2ef2-4fe8-be29
-6f2c3fcc7e35,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4461a840382433ce4c8a6f37fc819725d31e2075f1670cb56237965555159f42,PodSandboxId:5664e1f96c4c0c218278727eeed301014e9b3a0687a3db76fc8af725e4555183,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726860015825323175,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 822c8ceecdfe4119d999c5ee754b8ea8,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e8c8df52527f8e0973846dfbbb41192f0968ba9bf00c00c03d6b7da2cc76c21,PodSandboxId:04efda436d7240785cb8c0b48a1928dfbc3b03ffd216666b108f27cbfbadedb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726860015795461437,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6682239b7fb722a605bb9fb206b85a2,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fab8a49afdb38089ed0f1190eef6bdf74f69a5b5bca37ddf5156576bed7e64d8,PodSandboxId:96e2d4cf2c1fbe2fc2b1bd48e31218fd3dabf1869233ade215a1dea2d0c9b7c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726860015822873372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56fdd834c9abaafc449abcb898099f8d,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23a0fc48b5b4d9d4c9ae9065d80b0b2d042a7d5e7919702776ea5eec96d6d70b,PodSandboxId:d22dc1cbb45f98df2dd239baaa1834cf1d79630d7011a5acc083024c76958a99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726860015790876301,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ca4c90e5672c1e7faf18f515c7ce595,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=75a91215-172c-4680-bd51-422f3f57f4ea name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:28:45 multinode-756894 crio[2698]: time="2024-09-20 19:28:45.771054893Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3a7a1b9d-b8a8-4753-9e72-a1b7715338c3 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:28:45 multinode-756894 crio[2698]: time="2024-09-20 19:28:45.771170004Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3a7a1b9d-b8a8-4753-9e72-a1b7715338c3 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:28:45 multinode-756894 crio[2698]: time="2024-09-20 19:28:45.772509353Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f91b0941-7d22-4c0b-84e7-5cbeaee88bef name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:28:45 multinode-756894 crio[2698]: time="2024-09-20 19:28:45.772998630Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860525772972901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f91b0941-7d22-4c0b-84e7-5cbeaee88bef name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:28:45 multinode-756894 crio[2698]: time="2024-09-20 19:28:45.773555311Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a707bef7-afbf-49f1-a327-b65dbbc15eb0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:28:45 multinode-756894 crio[2698]: time="2024-09-20 19:28:45.773615253Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a707bef7-afbf-49f1-a327-b65dbbc15eb0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:28:45 multinode-756894 crio[2698]: time="2024-09-20 19:28:45.774054092Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1032aae897c6a6e789586471840775b35b66bd755e2ea7221f6c2a7e9d01023f,PodSandboxId:78021bafe3d20c913f62fb213864b4120ec395f0f5f378ad7cbecfa5d6cc413b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726860462507047601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kr8zb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 20cc00ef-3035-488b-8846-4f43d56fa236,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0475dc410c9bd2962c731c4869948eccbd23017afbe2bf565ebbd87fdeb2bf23,PodSandboxId:76d754085d774cda47005a0433633d32fe881e18fec2cb9bd995aef50fa7d786,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726860428957320778,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2r822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e0c1f9-bbfc-4362-89fa-daeaae236602,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e31c03c3121e36f8b680ffeb04be136d13ed0eb30cc4232945f16828840b7fcf,PodSandboxId:4e099d73b22682c0a5a5a1dde1626a039ae8d2d98ffc894094e53b18de416a1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726860428727515994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-k7xq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38699bbe-9579-45e3-abe2-828348e890d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b1326c5d456cb895967959e46bbd4ccd2743f8eebbf4ea2920d93996b086c5,PodSandboxId:38808a33eadabf60b3bf793daee35e8b017adba19989f83c8c108d030de0a94b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726860428654175913,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5tkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c6ded21-2ef2-4fe8-be29-6f2c3fcc7e35,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60955aaf5949a780a0a25c5166eda54b8cfa0b6459fc2943c7962b3d0b2b479c,PodSandboxId:9f438bf46a92e7ffab993bc69d2ce1ab0a773e378252bf0e2ff344aa0a2c1065,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726860428685619092,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a43c8dc-3fd3-4761-876b-2494967a2f7b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9bd3ed7c925e80838dbb28a891c683435a9d040feae22c3e18aa8e7273cc15,PodSandboxId:77bd787f41e6c6feee7f8596005b33a7314f2999fd11550c663b84f758a289d4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726860424851395385,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6682239b7fb722a605bb9fb206b85a2,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5e65f8c20a2386e254f6706107d0cc7f75f834b088e05d4f04adfc74984a60,PodSandboxId:5fd31bbab35fed3fb22d7232ee57a5e50103639d591865ca5cfe6e2c2cccef57,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726860424814308052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 822c8ceecdfe4119d999c5ee754b8ea8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c77a7e1a224a7e290b56fbe2966968e539e4d6246462134099cf5e363b83dffb,PodSandboxId:ac9232fb23f537a32cd6724d4f0aa7417b505d67a73444703a1614fd39c88ec8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726860424831160293,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ca4c90e5672c1e7faf18f515c7ce595,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6c2d47645b3c320308300d42f92d21313904e5f30c2e0843aaa7c588014c301,PodSandboxId:ed88405164d64c4b8218fab6d9abcde896bfb41cb5c836aed409402e9a0f4b6e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726860424763351235,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56fdd834c9abaafc449abcb898099f8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8dcd882059b0fe1be82143b183508ab50d5688d42f4fd8d917bda34237eba96,PodSandboxId:5a505c52e820cb170c44f822bb0f5662b81a1567befe1e0296e6e9c359b4a0cd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726860095699891078,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kr8zb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 20cc00ef-3035-488b-8846-4f43d56fa236,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d4e37d4505a966f9c27a077765be9a06e11ac9bc0e052057beff94627d97aa,PodSandboxId:983db580543713ff6b79c9d54e5d3d700566e8c06206f15b4fd8278675f97229,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726860038958952580,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-k7xq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38699bbe-9579-45e3-abe2-828348e890d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28022afd37d6d2f008deeded66b17c0a72113eb2ad5fa6907f1036e18f1d975f,PodSandboxId:484bc97168c87403e6d624f33c30eae4932c92e413d6fc360ca0cde0661bbbe1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726860038935052890,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 2a43c8dc-3fd3-4761-876b-2494967a2f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fccbad40f2b3455e5b2be6eb12686d14833c14db21d1480e73f0f2e178f535d6,PodSandboxId:212c62c39933f2f28d2f41ea89dd26b2c4b4c9f9ceaa334e43d1d4c8fdabac3c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726860027205042203,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2r822,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 94e0c1f9-bbfc-4362-89fa-daeaae236602,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12fa8b93a3911dbfe0fc55628c90d22e218afdfbe8f5e7195f783b7c7c8af414,PodSandboxId:2409beaba30e5726b8d9b5f1b0e2faac96a8251d15d45802ade6f092deab1ef7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726860026953549127,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5tkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c6ded21-2ef2-4fe8-be29
-6f2c3fcc7e35,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4461a840382433ce4c8a6f37fc819725d31e2075f1670cb56237965555159f42,PodSandboxId:5664e1f96c4c0c218278727eeed301014e9b3a0687a3db76fc8af725e4555183,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726860015825323175,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 822c8ceecdfe4119d999c5ee754b8ea8,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e8c8df52527f8e0973846dfbbb41192f0968ba9bf00c00c03d6b7da2cc76c21,PodSandboxId:04efda436d7240785cb8c0b48a1928dfbc3b03ffd216666b108f27cbfbadedb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726860015795461437,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6682239b7fb722a605bb9fb206b85a2,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fab8a49afdb38089ed0f1190eef6bdf74f69a5b5bca37ddf5156576bed7e64d8,PodSandboxId:96e2d4cf2c1fbe2fc2b1bd48e31218fd3dabf1869233ade215a1dea2d0c9b7c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726860015822873372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56fdd834c9abaafc449abcb898099f8d,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23a0fc48b5b4d9d4c9ae9065d80b0b2d042a7d5e7919702776ea5eec96d6d70b,PodSandboxId:d22dc1cbb45f98df2dd239baaa1834cf1d79630d7011a5acc083024c76958a99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726860015790876301,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ca4c90e5672c1e7faf18f515c7ce595,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a707bef7-afbf-49f1-a327-b65dbbc15eb0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:28:45 multinode-756894 crio[2698]: time="2024-09-20 19:28:45.820092641Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4940a5dd-3e16-4b00-8555-58477a5ca457 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:28:45 multinode-756894 crio[2698]: time="2024-09-20 19:28:45.820164213Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4940a5dd-3e16-4b00-8555-58477a5ca457 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:28:45 multinode-756894 crio[2698]: time="2024-09-20 19:28:45.821167255Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=72741e2e-efae-4d14-885b-64cedc687c6b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:28:45 multinode-756894 crio[2698]: time="2024-09-20 19:28:45.821614847Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860525821592765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=72741e2e-efae-4d14-885b-64cedc687c6b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:28:45 multinode-756894 crio[2698]: time="2024-09-20 19:28:45.822414742Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f399ef00-5200-4197-9c23-b6a4fe3557c3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:28:45 multinode-756894 crio[2698]: time="2024-09-20 19:28:45.822468437Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f399ef00-5200-4197-9c23-b6a4fe3557c3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:28:45 multinode-756894 crio[2698]: time="2024-09-20 19:28:45.822840417Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1032aae897c6a6e789586471840775b35b66bd755e2ea7221f6c2a7e9d01023f,PodSandboxId:78021bafe3d20c913f62fb213864b4120ec395f0f5f378ad7cbecfa5d6cc413b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726860462507047601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kr8zb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 20cc00ef-3035-488b-8846-4f43d56fa236,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0475dc410c9bd2962c731c4869948eccbd23017afbe2bf565ebbd87fdeb2bf23,PodSandboxId:76d754085d774cda47005a0433633d32fe881e18fec2cb9bd995aef50fa7d786,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726860428957320778,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2r822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e0c1f9-bbfc-4362-89fa-daeaae236602,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e31c03c3121e36f8b680ffeb04be136d13ed0eb30cc4232945f16828840b7fcf,PodSandboxId:4e099d73b22682c0a5a5a1dde1626a039ae8d2d98ffc894094e53b18de416a1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726860428727515994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-k7xq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38699bbe-9579-45e3-abe2-828348e890d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b1326c5d456cb895967959e46bbd4ccd2743f8eebbf4ea2920d93996b086c5,PodSandboxId:38808a33eadabf60b3bf793daee35e8b017adba19989f83c8c108d030de0a94b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726860428654175913,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5tkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c6ded21-2ef2-4fe8-be29-6f2c3fcc7e35,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60955aaf5949a780a0a25c5166eda54b8cfa0b6459fc2943c7962b3d0b2b479c,PodSandboxId:9f438bf46a92e7ffab993bc69d2ce1ab0a773e378252bf0e2ff344aa0a2c1065,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726860428685619092,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a43c8dc-3fd3-4761-876b-2494967a2f7b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9bd3ed7c925e80838dbb28a891c683435a9d040feae22c3e18aa8e7273cc15,PodSandboxId:77bd787f41e6c6feee7f8596005b33a7314f2999fd11550c663b84f758a289d4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726860424851395385,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6682239b7fb722a605bb9fb206b85a2,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5e65f8c20a2386e254f6706107d0cc7f75f834b088e05d4f04adfc74984a60,PodSandboxId:5fd31bbab35fed3fb22d7232ee57a5e50103639d591865ca5cfe6e2c2cccef57,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726860424814308052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 822c8ceecdfe4119d999c5ee754b8ea8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c77a7e1a224a7e290b56fbe2966968e539e4d6246462134099cf5e363b83dffb,PodSandboxId:ac9232fb23f537a32cd6724d4f0aa7417b505d67a73444703a1614fd39c88ec8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726860424831160293,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ca4c90e5672c1e7faf18f515c7ce595,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6c2d47645b3c320308300d42f92d21313904e5f30c2e0843aaa7c588014c301,PodSandboxId:ed88405164d64c4b8218fab6d9abcde896bfb41cb5c836aed409402e9a0f4b6e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726860424763351235,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56fdd834c9abaafc449abcb898099f8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8dcd882059b0fe1be82143b183508ab50d5688d42f4fd8d917bda34237eba96,PodSandboxId:5a505c52e820cb170c44f822bb0f5662b81a1567befe1e0296e6e9c359b4a0cd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726860095699891078,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kr8zb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 20cc00ef-3035-488b-8846-4f43d56fa236,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d4e37d4505a966f9c27a077765be9a06e11ac9bc0e052057beff94627d97aa,PodSandboxId:983db580543713ff6b79c9d54e5d3d700566e8c06206f15b4fd8278675f97229,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726860038958952580,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-k7xq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38699bbe-9579-45e3-abe2-828348e890d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28022afd37d6d2f008deeded66b17c0a72113eb2ad5fa6907f1036e18f1d975f,PodSandboxId:484bc97168c87403e6d624f33c30eae4932c92e413d6fc360ca0cde0661bbbe1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726860038935052890,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 2a43c8dc-3fd3-4761-876b-2494967a2f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fccbad40f2b3455e5b2be6eb12686d14833c14db21d1480e73f0f2e178f535d6,PodSandboxId:212c62c39933f2f28d2f41ea89dd26b2c4b4c9f9ceaa334e43d1d4c8fdabac3c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726860027205042203,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2r822,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 94e0c1f9-bbfc-4362-89fa-daeaae236602,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12fa8b93a3911dbfe0fc55628c90d22e218afdfbe8f5e7195f783b7c7c8af414,PodSandboxId:2409beaba30e5726b8d9b5f1b0e2faac96a8251d15d45802ade6f092deab1ef7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726860026953549127,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5tkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c6ded21-2ef2-4fe8-be29
-6f2c3fcc7e35,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4461a840382433ce4c8a6f37fc819725d31e2075f1670cb56237965555159f42,PodSandboxId:5664e1f96c4c0c218278727eeed301014e9b3a0687a3db76fc8af725e4555183,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726860015825323175,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 822c8ceecdfe4119d999c5ee754b8ea8,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e8c8df52527f8e0973846dfbbb41192f0968ba9bf00c00c03d6b7da2cc76c21,PodSandboxId:04efda436d7240785cb8c0b48a1928dfbc3b03ffd216666b108f27cbfbadedb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726860015795461437,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6682239b7fb722a605bb9fb206b85a2,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fab8a49afdb38089ed0f1190eef6bdf74f69a5b5bca37ddf5156576bed7e64d8,PodSandboxId:96e2d4cf2c1fbe2fc2b1bd48e31218fd3dabf1869233ade215a1dea2d0c9b7c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726860015822873372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56fdd834c9abaafc449abcb898099f8d,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23a0fc48b5b4d9d4c9ae9065d80b0b2d042a7d5e7919702776ea5eec96d6d70b,PodSandboxId:d22dc1cbb45f98df2dd239baaa1834cf1d79630d7011a5acc083024c76958a99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726860015790876301,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ca4c90e5672c1e7faf18f515c7ce595,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f399ef00-5200-4197-9c23-b6a4fe3557c3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1032aae897c6a       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   78021bafe3d20       busybox-7dff88458-kr8zb
	0475dc410c9bd       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   76d754085d774       kindnet-2r822
	e31c03c3121e3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   1                   4e099d73b2268       coredns-7c65d6cfc9-k7xq2
	60955aaf5949a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   9f438bf46a92e       storage-provisioner
	88b1326c5d456       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                1                   38808a33eadab       kube-proxy-m5tkt
	ca9bd3ed7c925       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      About a minute ago   Running             kube-scheduler            1                   77bd787f41e6c       kube-scheduler-multinode-756894
	c77a7e1a224a7       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   1                   ac9232fb23f53       kube-controller-manager-multinode-756894
	7b5e65f8c20a2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   5fd31bbab35fe       etcd-multinode-756894
	a6c2d47645b3c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            1                   ed88405164d64       kube-apiserver-multinode-756894
	f8dcd882059b0       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   5a505c52e820c       busybox-7dff88458-kr8zb
	72d4e37d4505a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      8 minutes ago        Exited              coredns                   0                   983db58054371       coredns-7c65d6cfc9-k7xq2
	28022afd37d6d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   484bc97168c87       storage-provisioner
	fccbad40f2b34       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      8 minutes ago        Exited              kindnet-cni               0                   212c62c39933f       kindnet-2r822
	12fa8b93a3911       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      8 minutes ago        Exited              kube-proxy                0                   2409beaba30e5       kube-proxy-m5tkt
	4461a84038243       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   5664e1f96c4c0       etcd-multinode-756894
	fab8a49afdb38       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      8 minutes ago        Exited              kube-apiserver            0                   96e2d4cf2c1fb       kube-apiserver-multinode-756894
	9e8c8df52527f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      8 minutes ago        Exited              kube-scheduler            0                   04efda436d724       kube-scheduler-multinode-756894
	23a0fc48b5b4d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      8 minutes ago        Exited              kube-controller-manager   0                   d22dc1cbb45f9       kube-controller-manager-multinode-756894
	
	
	==> coredns [72d4e37d4505a966f9c27a077765be9a06e11ac9bc0e052057beff94627d97aa] <==
	[INFO] 10.244.0.3:45793 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001584314s
	[INFO] 10.244.0.3:59518 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000055636s
	[INFO] 10.244.0.3:38343 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000047642s
	[INFO] 10.244.0.3:37834 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001115807s
	[INFO] 10.244.0.3:42888 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000057199s
	[INFO] 10.244.0.3:41157 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000047496s
	[INFO] 10.244.0.3:58352 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065473s
	[INFO] 10.244.1.2:51514 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152904s
	[INFO] 10.244.1.2:59373 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162696s
	[INFO] 10.244.1.2:44401 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091841s
	[INFO] 10.244.1.2:37696 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087115s
	[INFO] 10.244.0.3:42996 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132227s
	[INFO] 10.244.0.3:37331 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008351s
	[INFO] 10.244.0.3:58970 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010357s
	[INFO] 10.244.0.3:48326 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135472s
	[INFO] 10.244.1.2:45935 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130523s
	[INFO] 10.244.1.2:49766 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000224752s
	[INFO] 10.244.1.2:35175 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000146748s
	[INFO] 10.244.1.2:39378 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131962s
	[INFO] 10.244.0.3:35114 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108425s
	[INFO] 10.244.0.3:36492 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000100334s
	[INFO] 10.244.0.3:57706 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000059811s
	[INFO] 10.244.0.3:57975 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000596s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e31c03c3121e36f8b680ffeb04be136d13ed0eb30cc4232945f16828840b7fcf] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:53894 - 943 "HINFO IN 1506214172435802275.7599767001225360048. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031021286s
	
	
	==> describe nodes <==
	Name:               multinode-756894
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-756894
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=multinode-756894
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T19_20_22_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 19:20:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-756894
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:28:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 19:27:08 +0000   Fri, 20 Sep 2024 19:20:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 19:27:08 +0000   Fri, 20 Sep 2024 19:20:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 19:27:08 +0000   Fri, 20 Sep 2024 19:20:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 19:27:08 +0000   Fri, 20 Sep 2024 19:20:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.168
	  Hostname:    multinode-756894
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ddf615a403e94b3bbed6b8abde987c04
	  System UUID:                ddf615a4-03e9-4b3b-bed6-b8abde987c04
	  Boot ID:                    2a1e1ca6-0967-488d-bf89-1abcc6d05f87
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kr8zb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 coredns-7c65d6cfc9-k7xq2                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m20s
	  kube-system                 etcd-multinode-756894                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m26s
	  kube-system                 kindnet-2r822                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m21s
	  kube-system                 kube-apiserver-multinode-756894             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-controller-manager-multinode-756894    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 kube-proxy-m5tkt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 kube-scheduler-multinode-756894             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m18s                kube-proxy       
	  Normal  Starting                 96s                  kube-proxy       
	  Normal  NodeHasSufficientPID     8m26s                kubelet          Node multinode-756894 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m26s                kubelet          Node multinode-756894 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m26s                kubelet          Node multinode-756894 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m26s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m21s                node-controller  Node multinode-756894 event: Registered Node multinode-756894 in Controller
	  Normal  NodeReady                8m8s                 kubelet          Node multinode-756894 status is now: NodeReady
	  Normal  Starting                 102s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  102s (x8 over 102s)  kubelet          Node multinode-756894 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s (x8 over 102s)  kubelet          Node multinode-756894 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s (x7 over 102s)  kubelet          Node multinode-756894 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  102s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           95s                  node-controller  Node multinode-756894 event: Registered Node multinode-756894 in Controller
	
	
	Name:               multinode-756894-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-756894-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=multinode-756894
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T19_27_47_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 19:27:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-756894-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:28:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 19:28:17 +0000   Fri, 20 Sep 2024 19:27:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 19:28:17 +0000   Fri, 20 Sep 2024 19:27:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 19:28:17 +0000   Fri, 20 Sep 2024 19:27:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 19:28:17 +0000   Fri, 20 Sep 2024 19:28:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.204
	  Hostname:    multinode-756894-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 196b0cbe86e346c498b66aa6b18004c3
	  System UUID:                196b0cbe-86e3-46c4-98b6-6aa6b18004c3
	  Boot ID:                    7e230b42-61af-49bf-87bb-836299e7d24e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xbpkw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kindnet-zxd86              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m33s
	  kube-system                 kube-proxy-4m9vh           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 54s                    kube-proxy       
	  Normal  Starting                 7m27s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m33s (x2 over 7m33s)  kubelet          Node multinode-756894-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m33s (x2 over 7m33s)  kubelet          Node multinode-756894-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m33s (x2 over 7m33s)  kubelet          Node multinode-756894-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m14s                  kubelet          Node multinode-756894-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  60s (x2 over 60s)      kubelet          Node multinode-756894-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x2 over 60s)      kubelet          Node multinode-756894-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x2 over 60s)      kubelet          Node multinode-756894-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  60s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           55s                    node-controller  Node multinode-756894-m02 event: Registered Node multinode-756894-m02 in Controller
	  Normal  NodeReady                41s                    kubelet          Node multinode-756894-m02 status is now: NodeReady
	
	
	Name:               multinode-756894-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-756894-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=multinode-756894
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T19_28_25_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 19:28:24 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-756894-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:28:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 19:28:42 +0000   Fri, 20 Sep 2024 19:28:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 19:28:42 +0000   Fri, 20 Sep 2024 19:28:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 19:28:42 +0000   Fri, 20 Sep 2024 19:28:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 19:28:42 +0000   Fri, 20 Sep 2024 19:28:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.72
	  Hostname:    multinode-756894-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4964c1ad06954dbc98bef689fb35e547
	  System UUID:                4964c1ad-0695-4dbc-98be-f689fb35e547
	  Boot ID:                    ee891acf-6b42-4950-8f25-4086ce60c009
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-kr8ph       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m35s
	  kube-system                 kube-proxy-djt5n    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m30s                  kube-proxy  
	  Normal  Starting                 16s                    kube-proxy  
	  Normal  Starting                 5m42s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m35s (x2 over 6m36s)  kubelet     Node multinode-756894-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m35s (x2 over 6m36s)  kubelet     Node multinode-756894-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m35s (x2 over 6m36s)  kubelet     Node multinode-756894-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m35s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m17s                  kubelet     Node multinode-756894-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m47s (x2 over 5m47s)  kubelet     Node multinode-756894-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m47s (x2 over 5m47s)  kubelet     Node multinode-756894-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m47s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m47s (x2 over 5m47s)  kubelet     Node multinode-756894-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m47s                  kubelet     Starting kubelet.
	  Normal  NodeReady                5m29s                  kubelet     Node multinode-756894-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  22s (x2 over 22s)      kubelet     Node multinode-756894-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 22s)      kubelet     Node multinode-756894-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 22s)      kubelet     Node multinode-756894-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4s                     kubelet     Node multinode-756894-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.062780] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.180277] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.114991] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.269161] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.868815] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +3.400088] systemd-fstab-generator[872]: Ignoring "noauto" option for root device
	[  +0.060559] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.983400] systemd-fstab-generator[1208]: Ignoring "noauto" option for root device
	[  +0.095278] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.707595] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[  +0.086888] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.398435] kauditd_printk_skb: 60 callbacks suppressed
	[Sep20 19:21] kauditd_printk_skb: 12 callbacks suppressed
	[Sep20 19:26] systemd-fstab-generator[2623]: Ignoring "noauto" option for root device
	[  +0.139557] systemd-fstab-generator[2635]: Ignoring "noauto" option for root device
	[  +0.183646] systemd-fstab-generator[2649]: Ignoring "noauto" option for root device
	[  +0.157658] systemd-fstab-generator[2661]: Ignoring "noauto" option for root device
	[  +0.279246] systemd-fstab-generator[2689]: Ignoring "noauto" option for root device
	[Sep20 19:27] systemd-fstab-generator[2783]: Ignoring "noauto" option for root device
	[  +0.080233] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.236230] systemd-fstab-generator[2905]: Ignoring "noauto" option for root device
	[  +4.690570] kauditd_printk_skb: 74 callbacks suppressed
	[ +13.611127] systemd-fstab-generator[3746]: Ignoring "noauto" option for root device
	[  +0.094517] kauditd_printk_skb: 34 callbacks suppressed
	[ +20.173547] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [4461a840382433ce4c8a6f37fc819725d31e2075f1670cb56237965555159f42] <==
	{"level":"warn","ts":"2024-09-20T19:22:11.963629Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"413.709847ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T19:22:11.963672Z","caller":"traceutil/trace.go:171","msg":"trace[1618710968] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:585; }","duration":"413.750697ms","start":"2024-09-20T19:22:11.549913Z","end":"2024-09-20T19:22:11.963664Z","steps":["trace[1618710968] 'agreement among raft nodes before linearized reading'  (duration: 413.661995ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T19:22:11.963830Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T19:22:11.549888Z","time spent":"413.932122ms","remote":"127.0.0.1:54558","response type":"/etcdserverpb.KV/Range","request count":0,"request size":25,"response count":0,"response size":29,"request content":"key:\"/registry/limitranges\" limit:1 "}
	{"level":"warn","ts":"2024-09-20T19:22:11.963356Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T19:22:11.547376Z","time spent":"415.938679ms","remote":"127.0.0.1:54592","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":42,"response count":0,"response size":2073,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-756894-m03\" mod_revision:583 > success:<request_put:<key:\"/registry/minions/multinode-756894-m03\" value_size:1974 >> failure:<request_range:<key:\"/registry/minions/multinode-756894-m03\" > >"}
	{"level":"warn","ts":"2024-09-20T19:22:11.964040Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"353.551819ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-09-20T19:22:11.964076Z","caller":"traceutil/trace.go:171","msg":"trace[232552228] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:585; }","duration":"353.588641ms","start":"2024-09-20T19:22:11.610481Z","end":"2024-09-20T19:22:11.964070Z","steps":["trace[232552228] 'agreement among raft nodes before linearized reading'  (duration: 353.455385ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T19:22:11.964119Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T19:22:11.610435Z","time spent":"353.678754ms","remote":"127.0.0.1:54588","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1140,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"info","ts":"2024-09-20T19:22:12.073638Z","caller":"traceutil/trace.go:171","msg":"trace[1341622855] transaction","detail":"{read_only:false; response_revision:586; number_of_response:1; }","duration":"107.635451ms","start":"2024-09-20T19:22:11.965988Z","end":"2024-09-20T19:22:12.073623Z","steps":["trace[1341622855] 'process raft request'  (duration: 100.893522ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:22:12.073968Z","caller":"traceutil/trace.go:171","msg":"trace[282571320] transaction","detail":"{read_only:false; number_of_response:1; response_revision:586; }","duration":"103.08865ms","start":"2024-09-20T19:22:11.970870Z","end":"2024-09-20T19:22:12.073958Z","steps":["trace[282571320] 'process raft request'  (duration: 103.017745ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:22:12.074119Z","caller":"traceutil/trace.go:171","msg":"trace[1576313543] transaction","detail":"{read_only:false; response_revision:588; number_of_response:1; }","duration":"101.05498ms","start":"2024-09-20T19:22:11.973059Z","end":"2024-09-20T19:22:12.074114Z","steps":["trace[1576313543] 'process raft request'  (duration: 100.885914ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:22:12.074277Z","caller":"traceutil/trace.go:171","msg":"trace[821219588] transaction","detail":"{read_only:false; response_revision:587; number_of_response:1; }","duration":"101.294044ms","start":"2024-09-20T19:22:11.972978Z","end":"2024-09-20T19:22:12.074272Z","steps":["trace[821219588] 'process raft request'  (duration: 100.938334ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:22:12.074517Z","caller":"traceutil/trace.go:171","msg":"trace[1898443886] transaction","detail":"{read_only:false; response_revision:589; number_of_response:1; }","duration":"101.404358ms","start":"2024-09-20T19:22:11.973106Z","end":"2024-09-20T19:22:12.074511Z","steps":["trace[1898443886] 'process raft request'  (duration: 100.855462ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:22:12.073992Z","caller":"traceutil/trace.go:171","msg":"trace[2095581807] linearizableReadLoop","detail":"{readStateIndex:622; appliedIndex:620; }","duration":"102.90822ms","start":"2024-09-20T19:22:11.971076Z","end":"2024-09-20T19:22:12.073984Z","steps":["trace[2095581807] 'read index received'  (duration: 95.675567ms)","trace[2095581807] 'applied index is now lower than readState.Index'  (duration: 7.232261ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T19:22:12.074779Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.690716ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-756894-m03\" ","response":"range_response_count:1 size:2141"}
	{"level":"info","ts":"2024-09-20T19:22:12.074815Z","caller":"traceutil/trace.go:171","msg":"trace[1363851090] range","detail":"{range_begin:/registry/minions/multinode-756894-m03; range_end:; response_count:1; response_revision:589; }","duration":"103.735381ms","start":"2024-09-20T19:22:11.971073Z","end":"2024-09-20T19:22:12.074808Z","steps":["trace[1363851090] 'agreement among raft nodes before linearized reading'  (duration: 103.674148ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:25:22.660008Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-20T19:25:22.660155Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-756894","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.168:2380"],"advertise-client-urls":["https://192.168.39.168:2379"]}
	{"level":"warn","ts":"2024-09-20T19:25:22.660336Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T19:25:22.660488Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T19:25:22.738992Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.168:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T19:25:22.739054Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.168:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-20T19:25:22.739181Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e34fba8f5739efe8","current-leader-member-id":"e34fba8f5739efe8"}
	{"level":"info","ts":"2024-09-20T19:25:22.741890Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.168:2380"}
	{"level":"info","ts":"2024-09-20T19:25:22.742032Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.168:2380"}
	{"level":"info","ts":"2024-09-20T19:25:22.742057Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-756894","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.168:2380"],"advertise-client-urls":["https://192.168.39.168:2379"]}
	
	
	==> etcd [7b5e65f8c20a2386e254f6706107d0cc7f75f834b088e05d4f04adfc74984a60] <==
	{"level":"info","ts":"2024-09-20T19:27:05.439878Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.168:2380"}
	{"level":"info","ts":"2024-09-20T19:27:05.440235Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"e34fba8f5739efe8","initial-advertise-peer-urls":["https://192.168.39.168:2380"],"listen-peer-urls":["https://192.168.39.168:2380"],"advertise-client-urls":["https://192.168.39.168:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.168:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-20T19:27:05.440313Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-20T19:27:05.441805Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f729467791c9db0d","local-member-id":"e34fba8f5739efe8","added-peer-id":"e34fba8f5739efe8","added-peer-peer-urls":["https://192.168.39.168:2380"]}
	{"level":"info","ts":"2024-09-20T19:27:05.442252Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f729467791c9db0d","local-member-id":"e34fba8f5739efe8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:27:05.442337Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:27:05.441851Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-20T19:27:05.442761Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-20T19:27:06.631297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-20T19:27:06.631433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-20T19:27:06.631472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 received MsgPreVoteResp from e34fba8f5739efe8 at term 2"}
	{"level":"info","ts":"2024-09-20T19:27:06.631508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 became candidate at term 3"}
	{"level":"info","ts":"2024-09-20T19:27:06.631533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 received MsgVoteResp from e34fba8f5739efe8 at term 3"}
	{"level":"info","ts":"2024-09-20T19:27:06.631559Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 became leader at term 3"}
	{"level":"info","ts":"2024-09-20T19:27:06.631585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e34fba8f5739efe8 elected leader e34fba8f5739efe8 at term 3"}
	{"level":"info","ts":"2024-09-20T19:27:06.638261Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T19:27:06.638212Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e34fba8f5739efe8","local-member-attributes":"{Name:multinode-756894 ClientURLs:[https://192.168.39.168:2379]}","request-path":"/0/members/e34fba8f5739efe8/attributes","cluster-id":"f729467791c9db0d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T19:27:06.639072Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T19:27:06.639329Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T19:27:06.639365Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T19:27:06.639748Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T19:27:06.640114Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T19:27:06.640888Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.168:2379"}
	{"level":"info","ts":"2024-09-20T19:27:06.640985Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T19:27:51.122309Z","caller":"traceutil/trace.go:171","msg":"trace[1699313954] transaction","detail":"{read_only:false; response_revision:1030; number_of_response:1; }","duration":"206.099715ms","start":"2024-09-20T19:27:50.916160Z","end":"2024-09-20T19:27:51.122259Z","steps":["trace[1699313954] 'process raft request'  (duration: 205.955081ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:28:46 up 9 min,  0 users,  load average: 0.12, 0.13, 0.09
	Linux multinode-756894 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [0475dc410c9bd2962c731c4869948eccbd23017afbe2bf565ebbd87fdeb2bf23] <==
	I0920 19:27:59.921815       1 main.go:322] Node multinode-756894-m03 has CIDR [10.244.3.0/24] 
	I0920 19:28:09.922367       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 19:28:09.922395       1 main.go:299] handling current node
	I0920 19:28:09.922408       1 main.go:295] Handling node with IPs: map[192.168.39.204:{}]
	I0920 19:28:09.922412       1 main.go:322] Node multinode-756894-m02 has CIDR [10.244.1.0/24] 
	I0920 19:28:09.922527       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0920 19:28:09.922549       1 main.go:322] Node multinode-756894-m03 has CIDR [10.244.3.0/24] 
	I0920 19:28:19.921490       1 main.go:295] Handling node with IPs: map[192.168.39.204:{}]
	I0920 19:28:19.921548       1 main.go:322] Node multinode-756894-m02 has CIDR [10.244.1.0/24] 
	I0920 19:28:19.921849       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0920 19:28:19.921908       1 main.go:322] Node multinode-756894-m03 has CIDR [10.244.3.0/24] 
	I0920 19:28:19.922040       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 19:28:19.922071       1 main.go:299] handling current node
	I0920 19:28:29.924391       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 19:28:29.924513       1 main.go:299] handling current node
	I0920 19:28:29.924582       1 main.go:295] Handling node with IPs: map[192.168.39.204:{}]
	I0920 19:28:29.924589       1 main.go:322] Node multinode-756894-m02 has CIDR [10.244.1.0/24] 
	I0920 19:28:29.925003       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0920 19:28:29.925088       1 main.go:322] Node multinode-756894-m03 has CIDR [10.244.2.0/24] 
	I0920 19:28:39.924381       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0920 19:28:39.924510       1 main.go:322] Node multinode-756894-m03 has CIDR [10.244.2.0/24] 
	I0920 19:28:39.924650       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 19:28:39.924672       1 main.go:299] handling current node
	I0920 19:28:39.924763       1 main.go:295] Handling node with IPs: map[192.168.39.204:{}]
	I0920 19:28:39.924786       1 main.go:322] Node multinode-756894-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [fccbad40f2b3455e5b2be6eb12686d14833c14db21d1480e73f0f2e178f535d6] <==
	I0920 19:24:38.224187       1 main.go:322] Node multinode-756894-m03 has CIDR [10.244.3.0/24] 
	I0920 19:24:48.217379       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 19:24:48.217506       1 main.go:299] handling current node
	I0920 19:24:48.217543       1 main.go:295] Handling node with IPs: map[192.168.39.204:{}]
	I0920 19:24:48.217563       1 main.go:322] Node multinode-756894-m02 has CIDR [10.244.1.0/24] 
	I0920 19:24:48.217736       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0920 19:24:48.217763       1 main.go:322] Node multinode-756894-m03 has CIDR [10.244.3.0/24] 
	I0920 19:24:58.215889       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 19:24:58.216006       1 main.go:299] handling current node
	I0920 19:24:58.216037       1 main.go:295] Handling node with IPs: map[192.168.39.204:{}]
	I0920 19:24:58.216055       1 main.go:322] Node multinode-756894-m02 has CIDR [10.244.1.0/24] 
	I0920 19:24:58.216206       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0920 19:24:58.216228       1 main.go:322] Node multinode-756894-m03 has CIDR [10.244.3.0/24] 
	I0920 19:25:08.217913       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 19:25:08.218008       1 main.go:299] handling current node
	I0920 19:25:08.218044       1 main.go:295] Handling node with IPs: map[192.168.39.204:{}]
	I0920 19:25:08.218052       1 main.go:322] Node multinode-756894-m02 has CIDR [10.244.1.0/24] 
	I0920 19:25:08.218211       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0920 19:25:08.218243       1 main.go:322] Node multinode-756894-m03 has CIDR [10.244.3.0/24] 
	I0920 19:25:18.224267       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 19:25:18.224371       1 main.go:299] handling current node
	I0920 19:25:18.224409       1 main.go:295] Handling node with IPs: map[192.168.39.204:{}]
	I0920 19:25:18.224428       1 main.go:322] Node multinode-756894-m02 has CIDR [10.244.1.0/24] 
	I0920 19:25:18.224614       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0920 19:25:18.224666       1 main.go:322] Node multinode-756894-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [a6c2d47645b3c320308300d42f92d21313904e5f30c2e0843aaa7c588014c301] <==
	I0920 19:27:08.051463       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 19:27:08.051502       1 policy_source.go:224] refreshing policies
	I0920 19:27:08.058086       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0920 19:27:08.058164       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0920 19:27:08.058193       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0920 19:27:08.058218       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0920 19:27:08.058268       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 19:27:08.060074       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0920 19:27:08.060613       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0920 19:27:08.060850       1 shared_informer.go:320] Caches are synced for configmaps
	I0920 19:27:08.060963       1 aggregator.go:171] initial CRD sync complete...
	I0920 19:27:08.060989       1 autoregister_controller.go:144] Starting autoregister controller
	I0920 19:27:08.061011       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0920 19:27:08.061032       1 cache.go:39] Caches are synced for autoregister controller
	I0920 19:27:08.063361       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0920 19:27:08.067128       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0920 19:27:08.085574       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0920 19:27:08.877013       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0920 19:27:10.079393       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 19:27:10.198262       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0920 19:27:10.216224       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 19:27:10.284082       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0920 19:27:10.293426       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0920 19:27:11.351394       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 19:27:11.598385       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [fab8a49afdb38089ed0f1190eef6bdf74f69a5b5bca37ddf5156576bed7e64d8] <==
	I0920 19:20:20.934082       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0920 19:20:20.949478       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 19:20:25.721483       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0920 19:20:26.022212       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0920 19:21:36.622095       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:60976: use of closed network connection
	E0920 19:21:36.805905       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:60992: use of closed network connection
	E0920 19:21:37.030494       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:32792: use of closed network connection
	E0920 19:21:37.203996       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:32810: use of closed network connection
	E0920 19:21:37.371898       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:32826: use of closed network connection
	E0920 19:21:37.556950       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:32838: use of closed network connection
	E0920 19:21:37.830820       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:32874: use of closed network connection
	E0920 19:21:38.002893       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:32892: use of closed network connection
	E0920 19:21:38.182972       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:32902: use of closed network connection
	E0920 19:21:38.356269       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:32910: use of closed network connection
	E0920 19:22:11.874856       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
	E0920 19:22:11.874870       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0920 19:22:11.874906       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 10.975µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0920 19:22:11.876250       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0920 19:22:11.876296       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0920 19:22:11.877468       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0920 19:22:11.877508       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0920 19:22:11.878659       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0920 19:22:11.878835       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.210266ms" method="PATCH" path="/api/v1/namespaces/default/events/multinode-756894-m03.17f70a23c909ce57" result=null
	E0920 19:22:11.880171       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.481087ms" method="GET" path="/api/v1/nodes/multinode-756894-m03" result=null
	I0920 19:25:22.660853       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-controller-manager [23a0fc48b5b4d9d4c9ae9065d80b0b2d042a7d5e7919702776ea5eec96d6d70b] <==
	I0920 19:22:58.835358       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-756894-m02"
	I0920 19:22:58.835589       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:22:59.884223       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-756894-m02"
	I0920 19:22:59.884949       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-756894-m03\" does not exist"
	I0920 19:22:59.904652       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-756894-m03" podCIDRs=["10.244.3.0/24"]
	I0920 19:22:59.904824       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:22:59.904848       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:22:59.904914       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:23:00.310326       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:23:00.353080       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:23:00.713447       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:23:10.227188       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:23:17.141483       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-756894-m02"
	I0920 19:23:17.142120       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:23:17.154115       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:23:20.234824       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:24:00.255253       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-756894-m03"
	I0920 19:24:00.255666       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m02"
	I0920 19:24:00.259259       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:24:00.281111       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m02"
	I0920 19:24:00.284964       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:24:00.312776       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.584036ms"
	I0920 19:24:00.313856       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="36.108µs"
	I0920 19:24:05.330578       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m02"
	I0920 19:24:15.416456       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	
	
	==> kube-controller-manager [c77a7e1a224a7e290b56fbe2966968e539e4d6246462134099cf5e363b83dffb] <==
	I0920 19:28:05.379641       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m02"
	I0920 19:28:05.386258       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.048µs"
	I0920 19:28:05.399085       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="93.771µs"
	I0920 19:28:06.446312       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m02"
	I0920 19:28:07.130245       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="5.637981ms"
	I0920 19:28:07.130755       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="146.03µs"
	I0920 19:28:17.755589       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m02"
	I0920 19:28:23.363678       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:28:23.382252       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:28:23.615178       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:28:23.615328       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-756894-m02"
	I0920 19:28:24.737670       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-756894-m02"
	I0920 19:28:24.738132       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-756894-m03\" does not exist"
	I0920 19:28:24.754590       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-756894-m03" podCIDRs=["10.244.2.0/24"]
	I0920 19:28:24.755296       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:28:24.755474       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:28:24.771985       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:28:25.147608       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:28:25.475102       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:28:26.518625       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:28:35.118677       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:28:42.951413       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:28:42.952124       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-756894-m02"
	I0920 19:28:42.960660       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:28:46.466250       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	
	
	==> kube-proxy [12fa8b93a3911dbfe0fc55628c90d22e218afdfbe8f5e7195f783b7c7c8af414] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 19:20:27.410928       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 19:20:27.436519       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.168"]
	E0920 19:20:27.437783       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 19:20:27.488313       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 19:20:27.488360       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 19:20:27.488385       1 server_linux.go:169] "Using iptables Proxier"
	I0920 19:20:27.491367       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 19:20:27.491841       1 server.go:483] "Version info" version="v1.31.1"
	I0920 19:20:27.491870       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 19:20:27.493041       1 config.go:199] "Starting service config controller"
	I0920 19:20:27.493090       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 19:20:27.493129       1 config.go:105] "Starting endpoint slice config controller"
	I0920 19:20:27.493151       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 19:20:27.493597       1 config.go:328] "Starting node config controller"
	I0920 19:20:27.493645       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 19:20:27.593190       1 shared_informer.go:320] Caches are synced for service config
	I0920 19:20:27.593389       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 19:20:27.593738       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [88b1326c5d456cb895967959e46bbd4ccd2743f8eebbf4ea2920d93996b086c5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 19:27:09.094249       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 19:27:09.103439       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.168"]
	E0920 19:27:09.103521       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 19:27:09.184681       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 19:27:09.184809       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 19:27:09.184832       1 server_linux.go:169] "Using iptables Proxier"
	I0920 19:27:09.187279       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 19:27:09.187518       1 server.go:483] "Version info" version="v1.31.1"
	I0920 19:27:09.187548       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 19:27:09.189077       1 config.go:199] "Starting service config controller"
	I0920 19:27:09.189141       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 19:27:09.189173       1 config.go:105] "Starting endpoint slice config controller"
	I0920 19:27:09.189195       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 19:27:09.189790       1 config.go:328] "Starting node config controller"
	I0920 19:27:09.189818       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 19:27:09.290749       1 shared_informer.go:320] Caches are synced for node config
	I0920 19:27:09.290798       1 shared_informer.go:320] Caches are synced for service config
	I0920 19:27:09.290819       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [9e8c8df52527f8e0973846dfbbb41192f0968ba9bf00c00c03d6b7da2cc76c21] <==
	W0920 19:20:18.507151       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 19:20:18.508752       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0920 19:20:18.508813       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:20:18.507208       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 19:20:18.508932       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:20:18.507637       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 19:20:18.509062       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:20:19.385403       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 19:20:19.385491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:20:19.394665       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 19:20:19.394809       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 19:20:19.451175       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 19:20:19.451583       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 19:20:19.458030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 19:20:19.458111       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 19:20:19.534432       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 19:20:19.534535       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:20:19.556495       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 19:20:19.556583       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:20:19.621516       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 19:20:19.621641       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:20:19.718783       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 19:20:19.718891       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0920 19:20:21.602502       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0920 19:25:22.661422       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ca9bd3ed7c925e80838dbb28a891c683435a9d040feae22c3e18aa8e7273cc15] <==
	I0920 19:27:05.935135       1 serving.go:386] Generated self-signed cert in-memory
	W0920 19:27:07.922790       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 19:27:07.922885       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 19:27:07.922971       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 19:27:07.922983       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 19:27:07.997294       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 19:27:08.001748       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 19:27:08.005799       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 19:27:08.005866       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 19:27:08.008998       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 19:27:08.010832       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 19:27:08.106294       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 19:27:14 multinode-756894 kubelet[2912]: E0920 19:27:14.166155    2912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860434165422817,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:27:14 multinode-756894 kubelet[2912]: E0920 19:27:14.166210    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860434165422817,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:27:24 multinode-756894 kubelet[2912]: E0920 19:27:24.167394    2912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860444167110444,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:27:24 multinode-756894 kubelet[2912]: E0920 19:27:24.167460    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860444167110444,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:27:34 multinode-756894 kubelet[2912]: E0920 19:27:34.168810    2912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860454168456437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:27:34 multinode-756894 kubelet[2912]: E0920 19:27:34.169206    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860454168456437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:27:44 multinode-756894 kubelet[2912]: E0920 19:27:44.170354    2912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860464170068570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:27:44 multinode-756894 kubelet[2912]: E0920 19:27:44.170405    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860464170068570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:27:54 multinode-756894 kubelet[2912]: E0920 19:27:54.176863    2912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860474176333500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:27:54 multinode-756894 kubelet[2912]: E0920 19:27:54.176907    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860474176333500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:28:04 multinode-756894 kubelet[2912]: E0920 19:28:04.162384    2912 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 19:28:04 multinode-756894 kubelet[2912]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 19:28:04 multinode-756894 kubelet[2912]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 19:28:04 multinode-756894 kubelet[2912]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 19:28:04 multinode-756894 kubelet[2912]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 19:28:04 multinode-756894 kubelet[2912]: E0920 19:28:04.178285    2912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860484177527349,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:28:04 multinode-756894 kubelet[2912]: E0920 19:28:04.178332    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860484177527349,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:28:14 multinode-756894 kubelet[2912]: E0920 19:28:14.179876    2912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860494179565281,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:28:14 multinode-756894 kubelet[2912]: E0920 19:28:14.179917    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860494179565281,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:28:24 multinode-756894 kubelet[2912]: E0920 19:28:24.185994    2912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860504184448750,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:28:24 multinode-756894 kubelet[2912]: E0920 19:28:24.186053    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860504184448750,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:28:34 multinode-756894 kubelet[2912]: E0920 19:28:34.187808    2912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860514187593367,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:28:34 multinode-756894 kubelet[2912]: E0920 19:28:34.187833    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860514187593367,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:28:44 multinode-756894 kubelet[2912]: E0920 19:28:44.189461    2912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860524189057693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:28:44 multinode-756894 kubelet[2912]: E0920 19:28:44.189505    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860524189057693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 19:28:45.396143  783402 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19678-739831/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-756894 -n multinode-756894
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-756894 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (327.38s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (144.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 stop
E0920 19:29:55.905520  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-756894 stop: exit status 82 (2m0.46048901s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-756894-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-756894 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-756894 status: (18.666334654s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-756894 status --alsologtostderr: (3.359893955s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-756894 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-756894 status --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-756894 -n multinode-756894
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-756894 logs -n 25: (1.415000599s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-756894 ssh -n                                                                 | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | multinode-756894-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-756894 cp multinode-756894-m02:/home/docker/cp-test.txt                       | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | multinode-756894:/home/docker/cp-test_multinode-756894-m02_multinode-756894.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-756894 ssh -n                                                                 | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | multinode-756894-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-756894 ssh -n multinode-756894 sudo cat                                       | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | /home/docker/cp-test_multinode-756894-m02_multinode-756894.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-756894 cp multinode-756894-m02:/home/docker/cp-test.txt                       | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | multinode-756894-m03:/home/docker/cp-test_multinode-756894-m02_multinode-756894-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-756894 ssh -n                                                                 | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | multinode-756894-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-756894 ssh -n multinode-756894-m03 sudo cat                                   | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | /home/docker/cp-test_multinode-756894-m02_multinode-756894-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-756894 cp testdata/cp-test.txt                                                | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | multinode-756894-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-756894 ssh -n                                                                 | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | multinode-756894-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-756894 cp multinode-756894-m03:/home/docker/cp-test.txt                       | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3797588952/001/cp-test_multinode-756894-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-756894 ssh -n                                                                 | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | multinode-756894-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-756894 cp multinode-756894-m03:/home/docker/cp-test.txt                       | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | multinode-756894:/home/docker/cp-test_multinode-756894-m03_multinode-756894.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-756894 ssh -n                                                                 | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | multinode-756894-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-756894 ssh -n multinode-756894 sudo cat                                       | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | /home/docker/cp-test_multinode-756894-m03_multinode-756894.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-756894 cp multinode-756894-m03:/home/docker/cp-test.txt                       | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | multinode-756894-m02:/home/docker/cp-test_multinode-756894-m03_multinode-756894-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-756894 ssh -n                                                                 | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | multinode-756894-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-756894 ssh -n multinode-756894-m02 sudo cat                                   | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | /home/docker/cp-test_multinode-756894-m03_multinode-756894-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-756894 node stop m03                                                          | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	| node    | multinode-756894 node start                                                             | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:23 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-756894                                                                | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:23 UTC |                     |
	| stop    | -p multinode-756894                                                                     | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:23 UTC |                     |
	| start   | -p multinode-756894                                                                     | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:28 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-756894                                                                | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:28 UTC |                     |
	| node    | multinode-756894 node delete                                                            | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:28 UTC | 20 Sep 24 19:28 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-756894 stop                                                                   | multinode-756894 | jenkins | v1.34.0 | 20 Sep 24 19:28 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 19:25:21
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 19:25:21.724886  782266 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:25:21.725016  782266 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:25:21.725027  782266 out.go:358] Setting ErrFile to fd 2...
	I0920 19:25:21.725033  782266 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:25:21.725242  782266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
	I0920 19:25:21.725833  782266 out.go:352] Setting JSON to false
	I0920 19:25:21.726945  782266 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11272,"bootTime":1726849050,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 19:25:21.727058  782266 start.go:139] virtualization: kvm guest
	I0920 19:25:21.730068  782266 out.go:177] * [multinode-756894] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 19:25:21.731556  782266 notify.go:220] Checking for updates...
	I0920 19:25:21.731617  782266 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 19:25:21.733259  782266 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:25:21.734632  782266 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 19:25:21.735915  782266 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 19:25:21.737540  782266 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 19:25:21.738918  782266 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:25:21.740601  782266 config.go:182] Loaded profile config "multinode-756894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:25:21.740725  782266 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:25:21.741244  782266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:25:21.741299  782266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:25:21.756766  782266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44201
	I0920 19:25:21.757319  782266 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:25:21.757902  782266 main.go:141] libmachine: Using API Version  1
	I0920 19:25:21.757925  782266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:25:21.758315  782266 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:25:21.758499  782266 main.go:141] libmachine: (multinode-756894) Calling .DriverName
	I0920 19:25:21.794138  782266 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 19:25:21.795633  782266 start.go:297] selected driver: kvm2
	I0920 19:25:21.795654  782266 start.go:901] validating driver "kvm2" against &{Name:multinode-756894 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-756894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.204 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.72 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:25:21.795792  782266 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:25:21.796175  782266 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:25:21.796272  782266 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19678-739831/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 19:25:21.811914  782266 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 19:25:21.812633  782266 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:25:21.812664  782266 cni.go:84] Creating CNI manager for ""
	I0920 19:25:21.812736  782266 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0920 19:25:21.812807  782266 start.go:340] cluster config:
	{Name:multinode-756894 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-756894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.204 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.72 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:25:21.812967  782266 iso.go:125] acquiring lock: {Name:mk7c8e0c52ea50ffb7ac28fb9347d4f667085c62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:25:21.815324  782266 out.go:177] * Starting "multinode-756894" primary control-plane node in "multinode-756894" cluster
	I0920 19:25:21.816500  782266 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:25:21.816553  782266 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 19:25:21.816568  782266 cache.go:56] Caching tarball of preloaded images
	I0920 19:25:21.816642  782266 preload.go:172] Found /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 19:25:21.816655  782266 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 19:25:21.816827  782266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/multinode-756894/config.json ...
	I0920 19:25:21.817044  782266 start.go:360] acquireMachinesLock for multinode-756894: {Name:mke27b943eaf3105a3a7818ba8cbb5bd07aa92e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 19:25:21.817103  782266 start.go:364] duration metric: took 38.186µs to acquireMachinesLock for "multinode-756894"
	I0920 19:25:21.817124  782266 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:25:21.817130  782266 fix.go:54] fixHost starting: 
	I0920 19:25:21.817442  782266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:25:21.817475  782266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:25:21.831894  782266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32991
	I0920 19:25:21.832292  782266 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:25:21.832749  782266 main.go:141] libmachine: Using API Version  1
	I0920 19:25:21.832775  782266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:25:21.833212  782266 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:25:21.833419  782266 main.go:141] libmachine: (multinode-756894) Calling .DriverName
	I0920 19:25:21.833580  782266 main.go:141] libmachine: (multinode-756894) Calling .GetState
	I0920 19:25:21.835220  782266 fix.go:112] recreateIfNeeded on multinode-756894: state=Running err=<nil>
	W0920 19:25:21.835241  782266 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:25:21.840897  782266 out.go:177] * Updating the running kvm2 "multinode-756894" VM ...
	I0920 19:25:21.845426  782266 machine.go:93] provisionDockerMachine start ...
	I0920 19:25:21.845453  782266 main.go:141] libmachine: (multinode-756894) Calling .DriverName
	I0920 19:25:21.845697  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHHostname
	I0920 19:25:21.848439  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:25:21.848911  782266 main.go:141] libmachine: (multinode-756894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:41:fc", ip: ""} in network mk-multinode-756894: {Iface:virbr1 ExpiryTime:2024-09-20 20:19:52 +0000 UTC Type:0 Mac:52:54:00:69:41:fc Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-756894 Clientid:01:52:54:00:69:41:fc}
	I0920 19:25:21.848944  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined IP address 192.168.39.168 and MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:25:21.849093  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHPort
	I0920 19:25:21.849266  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHKeyPath
	I0920 19:25:21.849405  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHKeyPath
	I0920 19:25:21.849542  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHUsername
	I0920 19:25:21.849668  782266 main.go:141] libmachine: Using SSH client type: native
	I0920 19:25:21.849875  782266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0920 19:25:21.849895  782266 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:25:21.960038  782266 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-756894
	
	I0920 19:25:21.960069  782266 main.go:141] libmachine: (multinode-756894) Calling .GetMachineName
	I0920 19:25:21.960301  782266 buildroot.go:166] provisioning hostname "multinode-756894"
	I0920 19:25:21.960351  782266 main.go:141] libmachine: (multinode-756894) Calling .GetMachineName
	I0920 19:25:21.960582  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHHostname
	I0920 19:25:21.963232  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:25:21.963576  782266 main.go:141] libmachine: (multinode-756894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:41:fc", ip: ""} in network mk-multinode-756894: {Iface:virbr1 ExpiryTime:2024-09-20 20:19:52 +0000 UTC Type:0 Mac:52:54:00:69:41:fc Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-756894 Clientid:01:52:54:00:69:41:fc}
	I0920 19:25:21.963604  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined IP address 192.168.39.168 and MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:25:21.963755  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHPort
	I0920 19:25:21.963943  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHKeyPath
	I0920 19:25:21.964120  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHKeyPath
	I0920 19:25:21.964268  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHUsername
	I0920 19:25:21.964434  782266 main.go:141] libmachine: Using SSH client type: native
	I0920 19:25:21.964634  782266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0920 19:25:21.964650  782266 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-756894 && echo "multinode-756894" | sudo tee /etc/hostname
	I0920 19:25:22.086424  782266 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-756894
	
	I0920 19:25:22.086476  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHHostname
	I0920 19:25:22.089401  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:25:22.089699  782266 main.go:141] libmachine: (multinode-756894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:41:fc", ip: ""} in network mk-multinode-756894: {Iface:virbr1 ExpiryTime:2024-09-20 20:19:52 +0000 UTC Type:0 Mac:52:54:00:69:41:fc Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-756894 Clientid:01:52:54:00:69:41:fc}
	I0920 19:25:22.089730  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined IP address 192.168.39.168 and MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:25:22.089976  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHPort
	I0920 19:25:22.090185  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHKeyPath
	I0920 19:25:22.090374  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHKeyPath
	I0920 19:25:22.090540  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHUsername
	I0920 19:25:22.090706  782266 main.go:141] libmachine: Using SSH client type: native
	I0920 19:25:22.090974  782266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0920 19:25:22.090998  782266 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-756894' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-756894/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-756894' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:25:22.204031  782266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
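
The two SSH commands above are the whole of hostname provisioning: write the profile name to /etc/hostname, then make sure /etc/hosts resolves it via 127.0.1.1. Reconstructed as a standalone sketch (the name multinode-756894 is this run's profile; the logic is copied from the commands logged above):

	# Persist the machine name and make it resolvable locally.
	NAME=multinode-756894
	sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
	if ! grep -xq ".*\s$NAME" /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			# An existing 127.0.1.1 entry is rewritten in place...
			sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/g" /etc/hosts
		else
			# ...otherwise a fresh one is appended.
			echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
		fi
	fi
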
	I0920 19:25:22.204063  782266 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19678-739831/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-739831/.minikube}
	I0920 19:25:22.204106  782266 buildroot.go:174] setting up certificates
	I0920 19:25:22.204128  782266 provision.go:84] configureAuth start
	I0920 19:25:22.204145  782266 main.go:141] libmachine: (multinode-756894) Calling .GetMachineName
	I0920 19:25:22.204406  782266 main.go:141] libmachine: (multinode-756894) Calling .GetIP
	I0920 19:25:22.207156  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:25:22.207506  782266 main.go:141] libmachine: (multinode-756894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:41:fc", ip: ""} in network mk-multinode-756894: {Iface:virbr1 ExpiryTime:2024-09-20 20:19:52 +0000 UTC Type:0 Mac:52:54:00:69:41:fc Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-756894 Clientid:01:52:54:00:69:41:fc}
	I0920 19:25:22.207530  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined IP address 192.168.39.168 and MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:25:22.207664  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHHostname
	I0920 19:25:22.209875  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:25:22.210210  782266 main.go:141] libmachine: (multinode-756894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:41:fc", ip: ""} in network mk-multinode-756894: {Iface:virbr1 ExpiryTime:2024-09-20 20:19:52 +0000 UTC Type:0 Mac:52:54:00:69:41:fc Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-756894 Clientid:01:52:54:00:69:41:fc}
	I0920 19:25:22.210250  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined IP address 192.168.39.168 and MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:25:22.210354  782266 provision.go:143] copyHostCerts
	I0920 19:25:22.210385  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 19:25:22.210431  782266 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem, removing ...
	I0920 19:25:22.210447  782266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 19:25:22.210518  782266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem (1078 bytes)
	I0920 19:25:22.210601  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 19:25:22.210618  782266 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem, removing ...
	I0920 19:25:22.210624  782266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 19:25:22.210655  782266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem (1123 bytes)
	I0920 19:25:22.210715  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 19:25:22.210731  782266 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem, removing ...
	I0920 19:25:22.210736  782266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 19:25:22.210759  782266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem (1679 bytes)
	I0920 19:25:22.210818  782266 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem org=jenkins.multinode-756894 san=[127.0.0.1 192.168.39.168 localhost minikube multinode-756894]
	I0920 19:25:22.375032  782266 provision.go:177] copyRemoteCerts
	I0920 19:25:22.375099  782266 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:25:22.375124  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHHostname
	I0920 19:25:22.377912  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:25:22.378221  782266 main.go:141] libmachine: (multinode-756894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:41:fc", ip: ""} in network mk-multinode-756894: {Iface:virbr1 ExpiryTime:2024-09-20 20:19:52 +0000 UTC Type:0 Mac:52:54:00:69:41:fc Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-756894 Clientid:01:52:54:00:69:41:fc}
	I0920 19:25:22.378252  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined IP address 192.168.39.168 and MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:25:22.378476  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHPort
	I0920 19:25:22.378691  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHKeyPath
	I0920 19:25:22.378870  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHUsername
	I0920 19:25:22.379022  782266 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/multinode-756894/id_rsa Username:docker}
	I0920 19:25:22.461814  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 19:25:22.461901  782266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:25:22.487621  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 19:25:22.487700  782266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0920 19:25:22.512583  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 19:25:22.512676  782266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 19:25:22.538002  782266 provision.go:87] duration metric: took 333.853129ms to configureAuth
	I0920 19:25:22.538034  782266 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:25:22.538311  782266 config.go:182] Loaded profile config "multinode-756894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:25:22.538407  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHHostname
	I0920 19:25:22.541117  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:25:22.541449  782266 main.go:141] libmachine: (multinode-756894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:41:fc", ip: ""} in network mk-multinode-756894: {Iface:virbr1 ExpiryTime:2024-09-20 20:19:52 +0000 UTC Type:0 Mac:52:54:00:69:41:fc Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-756894 Clientid:01:52:54:00:69:41:fc}
	I0920 19:25:22.541470  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined IP address 192.168.39.168 and MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:25:22.541666  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHPort
	I0920 19:25:22.541876  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHKeyPath
	I0920 19:25:22.542027  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHKeyPath
	I0920 19:25:22.542157  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHUsername
	I0920 19:25:22.542345  782266 main.go:141] libmachine: Using SSH client type: native
	I0920 19:25:22.542520  782266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0920 19:25:22.542535  782266 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:26:53.297866  782266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:26:53.297908  782266 machine.go:96] duration metric: took 1m31.452461199s to provisionDockerMachine
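
Almost all of that 1m31s sits between the SSH command issued at 19:25:22 and its completion at 19:26:53: a one-liner that drops a CRI-O override into /etc/sysconfig and restarts the service so the cluster's service CIDR is treated as an insecure registry. The command, as logged above:

	# Treat the 10.96.0.0/12 service network as an insecure (plain-HTTP) registry,
	# then restart CRI-O to pick up the new option.
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
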
	I0920 19:26:53.297927  782266 start.go:293] postStartSetup for "multinode-756894" (driver="kvm2")
	I0920 19:26:53.297941  782266 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:26:53.297960  782266 main.go:141] libmachine: (multinode-756894) Calling .DriverName
	I0920 19:26:53.298281  782266 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:26:53.298308  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHHostname
	I0920 19:26:53.301683  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:26:53.302134  782266 main.go:141] libmachine: (multinode-756894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:41:fc", ip: ""} in network mk-multinode-756894: {Iface:virbr1 ExpiryTime:2024-09-20 20:19:52 +0000 UTC Type:0 Mac:52:54:00:69:41:fc Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-756894 Clientid:01:52:54:00:69:41:fc}
	I0920 19:26:53.302166  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined IP address 192.168.39.168 and MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:26:53.302345  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHPort
	I0920 19:26:53.302519  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHKeyPath
	I0920 19:26:53.302663  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHUsername
	I0920 19:26:53.302809  782266 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/multinode-756894/id_rsa Username:docker}
	I0920 19:26:53.387266  782266 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:26:53.391383  782266 command_runner.go:130] > NAME=Buildroot
	I0920 19:26:53.391405  782266 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0920 19:26:53.391412  782266 command_runner.go:130] > ID=buildroot
	I0920 19:26:53.391419  782266 command_runner.go:130] > VERSION_ID=2023.02.9
	I0920 19:26:53.391426  782266 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0920 19:26:53.391475  782266 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:26:53.391491  782266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/addons for local assets ...
	I0920 19:26:53.391576  782266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/files for local assets ...
	I0920 19:26:53.391691  782266 filesync.go:149] local asset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> 7484972.pem in /etc/ssl/certs
	I0920 19:26:53.391705  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /etc/ssl/certs/7484972.pem
	I0920 19:26:53.391798  782266 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:26:53.401643  782266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 19:26:53.427284  782266 start.go:296] duration metric: took 129.340801ms for postStartSetup
	I0920 19:26:53.427365  782266 fix.go:56] duration metric: took 1m31.610234241s for fixHost
	I0920 19:26:53.427424  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHHostname
	I0920 19:26:53.430056  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:26:53.430537  782266 main.go:141] libmachine: (multinode-756894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:41:fc", ip: ""} in network mk-multinode-756894: {Iface:virbr1 ExpiryTime:2024-09-20 20:19:52 +0000 UTC Type:0 Mac:52:54:00:69:41:fc Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-756894 Clientid:01:52:54:00:69:41:fc}
	I0920 19:26:53.430571  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined IP address 192.168.39.168 and MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:26:53.430735  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHPort
	I0920 19:26:53.430961  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHKeyPath
	I0920 19:26:53.431104  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHKeyPath
	I0920 19:26:53.431238  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHUsername
	I0920 19:26:53.431376  782266 main.go:141] libmachine: Using SSH client type: native
	I0920 19:26:53.431539  782266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0920 19:26:53.431548  782266 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:26:53.539770  782266 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726860413.517005152
	
	I0920 19:26:53.539800  782266 fix.go:216] guest clock: 1726860413.517005152
	I0920 19:26:53.539810  782266 fix.go:229] Guest: 2024-09-20 19:26:53.517005152 +0000 UTC Remote: 2024-09-20 19:26:53.427369408 +0000 UTC m=+91.740554816 (delta=89.635744ms)
	I0920 19:26:53.539862  782266 fix.go:200] guest clock delta is within tolerance: 89.635744ms
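
The clock check is a plain comparison: run date +%s.%N on the guest over SSH, take a local timestamp when the command returns, and accept the machine if the difference stays under the drift tolerance (89.6ms here). A rough shell equivalent, with the caveat that the ssh invocation below is schematic and minikube actually does this through its own SSH client in Go:

	# Compare guest and local wall clocks; small drift is tolerated.
	GUEST=$(ssh docker@192.168.39.168 'date +%s.%N')
	LOCAL=$(date +%s.%N)
	echo "guest clock delta: $(echo "$LOCAL - $GUEST" | bc -l)s"
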
	I0920 19:26:53.539870  782266 start.go:83] releasing machines lock for "multinode-756894", held for 1m31.722753741s
	I0920 19:26:53.539898  782266 main.go:141] libmachine: (multinode-756894) Calling .DriverName
	I0920 19:26:53.540133  782266 main.go:141] libmachine: (multinode-756894) Calling .GetIP
	I0920 19:26:53.543025  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:26:53.543374  782266 main.go:141] libmachine: (multinode-756894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:41:fc", ip: ""} in network mk-multinode-756894: {Iface:virbr1 ExpiryTime:2024-09-20 20:19:52 +0000 UTC Type:0 Mac:52:54:00:69:41:fc Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-756894 Clientid:01:52:54:00:69:41:fc}
	I0920 19:26:53.543409  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined IP address 192.168.39.168 and MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:26:53.543604  782266 main.go:141] libmachine: (multinode-756894) Calling .DriverName
	I0920 19:26:53.544241  782266 main.go:141] libmachine: (multinode-756894) Calling .DriverName
	I0920 19:26:53.544392  782266 main.go:141] libmachine: (multinode-756894) Calling .DriverName
	I0920 19:26:53.544482  782266 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:26:53.544539  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHHostname
	I0920 19:26:53.544611  782266 ssh_runner.go:195] Run: cat /version.json
	I0920 19:26:53.544637  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHHostname
	I0920 19:26:53.547164  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:26:53.547428  782266 main.go:141] libmachine: (multinode-756894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:41:fc", ip: ""} in network mk-multinode-756894: {Iface:virbr1 ExpiryTime:2024-09-20 20:19:52 +0000 UTC Type:0 Mac:52:54:00:69:41:fc Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-756894 Clientid:01:52:54:00:69:41:fc}
	I0920 19:26:53.547465  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined IP address 192.168.39.168 and MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:26:53.547572  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:26:53.547618  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHPort
	I0920 19:26:53.547771  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHKeyPath
	I0920 19:26:53.547903  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHUsername
	I0920 19:26:53.548024  782266 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/multinode-756894/id_rsa Username:docker}
	I0920 19:26:53.548056  782266 main.go:141] libmachine: (multinode-756894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:41:fc", ip: ""} in network mk-multinode-756894: {Iface:virbr1 ExpiryTime:2024-09-20 20:19:52 +0000 UTC Type:0 Mac:52:54:00:69:41:fc Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-756894 Clientid:01:52:54:00:69:41:fc}
	I0920 19:26:53.548081  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined IP address 192.168.39.168 and MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:26:53.548263  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHPort
	I0920 19:26:53.548425  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHKeyPath
	I0920 19:26:53.548560  782266 main.go:141] libmachine: (multinode-756894) Calling .GetSSHUsername
	I0920 19:26:53.548726  782266 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/multinode-756894/id_rsa Username:docker}
	I0920 19:26:53.655461  782266 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0920 19:26:53.655523  782266 command_runner.go:130] > {"iso_version": "v1.34.0-1726481713-19649", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "fcd4ba3dbb1ef408e3a4b79c864df2496ddd3848"}
	I0920 19:26:53.655649  782266 ssh_runner.go:195] Run: systemctl --version
	I0920 19:26:53.661895  782266 command_runner.go:130] > systemd 252 (252)
	I0920 19:26:53.661929  782266 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0920 19:26:53.661988  782266 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:26:53.824376  782266 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 19:26:53.830208  782266 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0920 19:26:53.830400  782266 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:26:53.830459  782266 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:26:53.839724  782266 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
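
Nothing matched here, but the find above is how any pre-existing bridge or podman CNI definitions would have been taken out of play: they are renamed with a .mk_disabled suffix rather than deleted. The same command with the shell escaping that the logger strips written back in:

	# Sideline existing bridge/podman CNI configs so the recommended kindnet CNI
	# can own pod networking; the originals are kept with a .mk_disabled suffix.
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
		\( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
		-printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;
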
	I0920 19:26:53.839752  782266 start.go:495] detecting cgroup driver to use...
	I0920 19:26:53.839823  782266 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:26:53.856427  782266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:26:53.871269  782266 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:26:53.871335  782266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:26:53.885564  782266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:26:53.899946  782266 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:26:54.053928  782266 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:26:54.189782  782266 docker.go:233] disabling docker service ...
	I0920 19:26:54.189882  782266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:26:54.206901  782266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:26:54.220619  782266 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:26:54.373767  782266 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:26:54.524979  782266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:26:54.541881  782266 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:26:54.562558  782266 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0920 19:26:54.562600  782266 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 19:26:54.562657  782266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:26:54.575136  782266 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:26:54.575208  782266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:26:54.587540  782266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:26:54.598353  782266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:26:54.608973  782266 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:26:54.619996  782266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:26:54.630578  782266 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:26:54.641566  782266 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:26:54.652175  782266 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:26:54.662399  782266 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0920 19:26:54.662467  782266 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:26:54.671994  782266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:26:54.808823  782266 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 19:27:01.183060  782266 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.374198752s)
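
The 6.37s restart caps the runtime configuration pass that starts at 19:26:54: crictl is pointed at CRI-O's socket, and /etc/crio/crio.conf.d/02-crio.conf is patched for the pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and unprivileged low ports. Condensed from the Run: lines above (the /etc/cni/net.mk cleanup and the ip_forward sysctl are omitted):

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	# Tell crictl where CRI-O listens.
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# Pause image and cgroup settings.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	# Let pods bind ports below 1024 without extra privileges.
	sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"
	sudo grep -q '^ *default_sysctls' "$CONF" || \
		sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
	# Reload units and restart the runtime.
	sudo systemctl daemon-reload && sudo systemctl restart crio
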
	I0920 19:27:01.183091  782266 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:27:01.183143  782266 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:27:01.188216  782266 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0920 19:27:01.188238  782266 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0920 19:27:01.188246  782266 command_runner.go:130] > Device: 0,22	Inode: 1303        Links: 1
	I0920 19:27:01.188257  782266 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0920 19:27:01.188266  782266 command_runner.go:130] > Access: 2024-09-20 19:27:01.049940129 +0000
	I0920 19:27:01.188279  782266 command_runner.go:130] > Modify: 2024-09-20 19:27:01.049940129 +0000
	I0920 19:27:01.188285  782266 command_runner.go:130] > Change: 2024-09-20 19:27:01.049940129 +0000
	I0920 19:27:01.188291  782266 command_runner.go:130] >  Birth: -
	I0920 19:27:01.188337  782266 start.go:563] Will wait 60s for crictl version
	I0920 19:27:01.188391  782266 ssh_runner.go:195] Run: which crictl
	I0920 19:27:01.192043  782266 command_runner.go:130] > /usr/bin/crictl
	I0920 19:27:01.192121  782266 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:27:01.227645  782266 command_runner.go:130] > Version:  0.1.0
	I0920 19:27:01.227671  782266 command_runner.go:130] > RuntimeName:  cri-o
	I0920 19:27:01.227704  782266 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0920 19:27:01.227726  782266 command_runner.go:130] > RuntimeApiVersion:  v1
	I0920 19:27:01.228927  782266 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 19:27:01.229017  782266 ssh_runner.go:195] Run: crio --version
	I0920 19:27:01.255656  782266 command_runner.go:130] > crio version 1.29.1
	I0920 19:27:01.255679  782266 command_runner.go:130] > Version:        1.29.1
	I0920 19:27:01.255685  782266 command_runner.go:130] > GitCommit:      unknown
	I0920 19:27:01.255689  782266 command_runner.go:130] > GitCommitDate:  unknown
	I0920 19:27:01.255693  782266 command_runner.go:130] > GitTreeState:   clean
	I0920 19:27:01.255699  782266 command_runner.go:130] > BuildDate:      2024-09-16T15:42:14Z
	I0920 19:27:01.255703  782266 command_runner.go:130] > GoVersion:      go1.21.6
	I0920 19:27:01.255707  782266 command_runner.go:130] > Compiler:       gc
	I0920 19:27:01.255716  782266 command_runner.go:130] > Platform:       linux/amd64
	I0920 19:27:01.255721  782266 command_runner.go:130] > Linkmode:       dynamic
	I0920 19:27:01.255725  782266 command_runner.go:130] > BuildTags:      
	I0920 19:27:01.255729  782266 command_runner.go:130] >   containers_image_ostree_stub
	I0920 19:27:01.255733  782266 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0920 19:27:01.255737  782266 command_runner.go:130] >   btrfs_noversion
	I0920 19:27:01.255742  782266 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0920 19:27:01.255746  782266 command_runner.go:130] >   libdm_no_deferred_remove
	I0920 19:27:01.255752  782266 command_runner.go:130] >   seccomp
	I0920 19:27:01.255756  782266 command_runner.go:130] > LDFlags:          unknown
	I0920 19:27:01.255761  782266 command_runner.go:130] > SeccompEnabled:   true
	I0920 19:27:01.255764  782266 command_runner.go:130] > AppArmorEnabled:  false
	I0920 19:27:01.256964  782266 ssh_runner.go:195] Run: crio --version
	I0920 19:27:01.287442  782266 command_runner.go:130] > crio version 1.29.1
	I0920 19:27:01.287474  782266 command_runner.go:130] > Version:        1.29.1
	I0920 19:27:01.287483  782266 command_runner.go:130] > GitCommit:      unknown
	I0920 19:27:01.287489  782266 command_runner.go:130] > GitCommitDate:  unknown
	I0920 19:27:01.287495  782266 command_runner.go:130] > GitTreeState:   clean
	I0920 19:27:01.287503  782266 command_runner.go:130] > BuildDate:      2024-09-16T15:42:14Z
	I0920 19:27:01.287510  782266 command_runner.go:130] > GoVersion:      go1.21.6
	I0920 19:27:01.287518  782266 command_runner.go:130] > Compiler:       gc
	I0920 19:27:01.287525  782266 command_runner.go:130] > Platform:       linux/amd64
	I0920 19:27:01.287533  782266 command_runner.go:130] > Linkmode:       dynamic
	I0920 19:27:01.287545  782266 command_runner.go:130] > BuildTags:      
	I0920 19:27:01.287556  782266 command_runner.go:130] >   containers_image_ostree_stub
	I0920 19:27:01.287563  782266 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0920 19:27:01.287569  782266 command_runner.go:130] >   btrfs_noversion
	I0920 19:27:01.287580  782266 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0920 19:27:01.287586  782266 command_runner.go:130] >   libdm_no_deferred_remove
	I0920 19:27:01.287591  782266 command_runner.go:130] >   seccomp
	I0920 19:27:01.287606  782266 command_runner.go:130] > LDFlags:          unknown
	I0920 19:27:01.287615  782266 command_runner.go:130] > SeccompEnabled:   true
	I0920 19:27:01.287622  782266 command_runner.go:130] > AppArmorEnabled:  false
	I0920 19:27:01.290549  782266 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 19:27:01.292060  782266 main.go:141] libmachine: (multinode-756894) Calling .GetIP
	I0920 19:27:01.295096  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:27:01.295554  782266 main.go:141] libmachine: (multinode-756894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:41:fc", ip: ""} in network mk-multinode-756894: {Iface:virbr1 ExpiryTime:2024-09-20 20:19:52 +0000 UTC Type:0 Mac:52:54:00:69:41:fc Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-756894 Clientid:01:52:54:00:69:41:fc}
	I0920 19:27:01.295591  782266 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined IP address 192.168.39.168 and MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:27:01.295819  782266 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 19:27:01.299937  782266 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0920 19:27:01.300034  782266 kubeadm.go:883] updating cluster {Name:multinode-756894 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-756894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.204 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.72 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:27:01.300202  782266 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:27:01.300255  782266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:27:01.351339  782266 command_runner.go:130] > {
	I0920 19:27:01.351365  782266 command_runner.go:130] >   "images": [
	I0920 19:27:01.351370  782266 command_runner.go:130] >     {
	I0920 19:27:01.351378  782266 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0920 19:27:01.351382  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.351388  782266 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0920 19:27:01.351407  782266 command_runner.go:130] >       ],
	I0920 19:27:01.351412  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.351422  782266 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0920 19:27:01.351433  782266 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0920 19:27:01.351438  782266 command_runner.go:130] >       ],
	I0920 19:27:01.351445  782266 command_runner.go:130] >       "size": "87190579",
	I0920 19:27:01.351452  782266 command_runner.go:130] >       "uid": null,
	I0920 19:27:01.351460  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.351468  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.351476  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.351479  782266 command_runner.go:130] >     },
	I0920 19:27:01.351482  782266 command_runner.go:130] >     {
	I0920 19:27:01.351488  782266 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0920 19:27:01.351492  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.351498  782266 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0920 19:27:01.351501  782266 command_runner.go:130] >       ],
	I0920 19:27:01.351506  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.351513  782266 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0920 19:27:01.351526  782266 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0920 19:27:01.351535  782266 command_runner.go:130] >       ],
	I0920 19:27:01.351542  782266 command_runner.go:130] >       "size": "1363676",
	I0920 19:27:01.351552  782266 command_runner.go:130] >       "uid": null,
	I0920 19:27:01.351561  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.351570  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.351574  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.351579  782266 command_runner.go:130] >     },
	I0920 19:27:01.351582  782266 command_runner.go:130] >     {
	I0920 19:27:01.351597  782266 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0920 19:27:01.351603  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.351608  782266 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0920 19:27:01.351614  782266 command_runner.go:130] >       ],
	I0920 19:27:01.351632  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.351645  782266 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0920 19:27:01.351664  782266 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0920 19:27:01.351672  782266 command_runner.go:130] >       ],
	I0920 19:27:01.351682  782266 command_runner.go:130] >       "size": "31470524",
	I0920 19:27:01.351689  782266 command_runner.go:130] >       "uid": null,
	I0920 19:27:01.351693  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.351697  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.351703  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.351709  782266 command_runner.go:130] >     },
	I0920 19:27:01.351717  782266 command_runner.go:130] >     {
	I0920 19:27:01.351727  782266 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0920 19:27:01.351735  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.351743  782266 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0920 19:27:01.351748  782266 command_runner.go:130] >       ],
	I0920 19:27:01.351755  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.351770  782266 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0920 19:27:01.351793  782266 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0920 19:27:01.351802  782266 command_runner.go:130] >       ],
	I0920 19:27:01.351812  782266 command_runner.go:130] >       "size": "63273227",
	I0920 19:27:01.351819  782266 command_runner.go:130] >       "uid": null,
	I0920 19:27:01.351824  782266 command_runner.go:130] >       "username": "nonroot",
	I0920 19:27:01.351833  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.351842  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.351851  782266 command_runner.go:130] >     },
	I0920 19:27:01.351857  782266 command_runner.go:130] >     {
	I0920 19:27:01.351867  782266 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0920 19:27:01.351877  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.351887  782266 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0920 19:27:01.351895  782266 command_runner.go:130] >       ],
	I0920 19:27:01.351904  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.351917  782266 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0920 19:27:01.351927  782266 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0920 19:27:01.351935  782266 command_runner.go:130] >       ],
	I0920 19:27:01.351945  782266 command_runner.go:130] >       "size": "149009664",
	I0920 19:27:01.351956  782266 command_runner.go:130] >       "uid": {
	I0920 19:27:01.351966  782266 command_runner.go:130] >         "value": "0"
	I0920 19:27:01.351974  782266 command_runner.go:130] >       },
	I0920 19:27:01.351983  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.351992  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.352002  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.352008  782266 command_runner.go:130] >     },
	I0920 19:27:01.352012  782266 command_runner.go:130] >     {
	I0920 19:27:01.352024  782266 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0920 19:27:01.352033  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.352042  782266 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0920 19:27:01.352051  782266 command_runner.go:130] >       ],
	I0920 19:27:01.352060  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.352074  782266 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0920 19:27:01.352088  782266 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0920 19:27:01.352097  782266 command_runner.go:130] >       ],
	I0920 19:27:01.352103  782266 command_runner.go:130] >       "size": "95237600",
	I0920 19:27:01.352108  782266 command_runner.go:130] >       "uid": {
	I0920 19:27:01.352114  782266 command_runner.go:130] >         "value": "0"
	I0920 19:27:01.352122  782266 command_runner.go:130] >       },
	I0920 19:27:01.352132  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.352139  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.352148  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.352154  782266 command_runner.go:130] >     },
	I0920 19:27:01.352162  782266 command_runner.go:130] >     {
	I0920 19:27:01.352191  782266 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0920 19:27:01.352206  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.352214  782266 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0920 19:27:01.352220  782266 command_runner.go:130] >       ],
	I0920 19:27:01.352227  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.352243  782266 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0920 19:27:01.352258  782266 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0920 19:27:01.352266  782266 command_runner.go:130] >       ],
	I0920 19:27:01.352279  782266 command_runner.go:130] >       "size": "89437508",
	I0920 19:27:01.352287  782266 command_runner.go:130] >       "uid": {
	I0920 19:27:01.352294  782266 command_runner.go:130] >         "value": "0"
	I0920 19:27:01.352302  782266 command_runner.go:130] >       },
	I0920 19:27:01.352307  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.352312  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.352318  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.352325  782266 command_runner.go:130] >     },
	I0920 19:27:01.352331  782266 command_runner.go:130] >     {
	I0920 19:27:01.352345  782266 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0920 19:27:01.352353  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.352362  782266 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0920 19:27:01.352370  782266 command_runner.go:130] >       ],
	I0920 19:27:01.352377  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.352397  782266 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0920 19:27:01.352411  782266 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0920 19:27:01.352419  782266 command_runner.go:130] >       ],
	I0920 19:27:01.352425  782266 command_runner.go:130] >       "size": "92733849",
	I0920 19:27:01.352434  782266 command_runner.go:130] >       "uid": null,
	I0920 19:27:01.352439  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.352445  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.352451  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.352455  782266 command_runner.go:130] >     },
	I0920 19:27:01.352460  782266 command_runner.go:130] >     {
	I0920 19:27:01.352468  782266 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0920 19:27:01.352473  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.352480  782266 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0920 19:27:01.352484  782266 command_runner.go:130] >       ],
	I0920 19:27:01.352489  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.352504  782266 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0920 19:27:01.352514  782266 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0920 19:27:01.352519  782266 command_runner.go:130] >       ],
	I0920 19:27:01.352525  782266 command_runner.go:130] >       "size": "68420934",
	I0920 19:27:01.352532  782266 command_runner.go:130] >       "uid": {
	I0920 19:27:01.352539  782266 command_runner.go:130] >         "value": "0"
	I0920 19:27:01.352544  782266 command_runner.go:130] >       },
	I0920 19:27:01.352551  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.352557  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.352563  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.352569  782266 command_runner.go:130] >     },
	I0920 19:27:01.352575  782266 command_runner.go:130] >     {
	I0920 19:27:01.352589  782266 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0920 19:27:01.352598  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.352606  782266 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0920 19:27:01.352614  782266 command_runner.go:130] >       ],
	I0920 19:27:01.352627  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.352637  782266 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0920 19:27:01.352651  782266 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0920 19:27:01.352660  782266 command_runner.go:130] >       ],
	I0920 19:27:01.352667  782266 command_runner.go:130] >       "size": "742080",
	I0920 19:27:01.352676  782266 command_runner.go:130] >       "uid": {
	I0920 19:27:01.352683  782266 command_runner.go:130] >         "value": "65535"
	I0920 19:27:01.352692  782266 command_runner.go:130] >       },
	I0920 19:27:01.352701  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.352709  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.352713  782266 command_runner.go:130] >       "pinned": true
	I0920 19:27:01.352718  782266 command_runner.go:130] >     }
	I0920 19:27:01.352723  782266 command_runner.go:130] >   ]
	I0920 19:27:01.352732  782266 command_runner.go:130] > }
	I0920 19:27:01.352950  782266 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 19:27:01.352963  782266 crio.go:433] Images already preloaded, skipping extraction
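
Note: the JSON above is the output of `sudo crictl images --output json`, which minikube inspects to decide whether the preloaded image tarball still needs extracting. The following is a minimal sketch of decoding that payload with the Go standard library; the struct and field names are inferred from the output shown here, not taken from minikube's own code.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList mirrors the fields visible in the crictl output above
	// (id, repoTags, repoDigests, size, username, pinned); the type name
	// is hypothetical and the uid object is omitted for brevity.
	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"`
			Username    string   `json:"username"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		// Assumes crictl is installed and the CRI socket is reachable.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			fmt.Println(img.RepoTags, img.Size)
		}
	}
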
	I0920 19:27:01.353021  782266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:27:01.387829  782266 command_runner.go:130] > {
	I0920 19:27:01.387850  782266 command_runner.go:130] >   "images": [
	I0920 19:27:01.387858  782266 command_runner.go:130] >     {
	I0920 19:27:01.387867  782266 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0920 19:27:01.387873  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.387890  782266 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0920 19:27:01.387894  782266 command_runner.go:130] >       ],
	I0920 19:27:01.387901  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.387908  782266 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0920 19:27:01.387915  782266 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0920 19:27:01.387925  782266 command_runner.go:130] >       ],
	I0920 19:27:01.387930  782266 command_runner.go:130] >       "size": "87190579",
	I0920 19:27:01.387934  782266 command_runner.go:130] >       "uid": null,
	I0920 19:27:01.387937  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.387945  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.387951  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.387954  782266 command_runner.go:130] >     },
	I0920 19:27:01.387958  782266 command_runner.go:130] >     {
	I0920 19:27:01.387966  782266 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0920 19:27:01.387972  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.387982  782266 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0920 19:27:01.387987  782266 command_runner.go:130] >       ],
	I0920 19:27:01.387992  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.388002  782266 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0920 19:27:01.388017  782266 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0920 19:27:01.388024  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388031  782266 command_runner.go:130] >       "size": "1363676",
	I0920 19:27:01.388035  782266 command_runner.go:130] >       "uid": null,
	I0920 19:27:01.388041  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.388048  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.388051  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.388055  782266 command_runner.go:130] >     },
	I0920 19:27:01.388059  782266 command_runner.go:130] >     {
	I0920 19:27:01.388065  782266 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0920 19:27:01.388071  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.388080  782266 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0920 19:27:01.388086  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388091  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.388098  782266 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0920 19:27:01.388108  782266 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0920 19:27:01.388111  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388115  782266 command_runner.go:130] >       "size": "31470524",
	I0920 19:27:01.388119  782266 command_runner.go:130] >       "uid": null,
	I0920 19:27:01.388123  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.388127  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.388131  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.388135  782266 command_runner.go:130] >     },
	I0920 19:27:01.388138  782266 command_runner.go:130] >     {
	I0920 19:27:01.388144  782266 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0920 19:27:01.388150  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.388155  782266 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0920 19:27:01.388161  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388165  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.388172  782266 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0920 19:27:01.388182  782266 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0920 19:27:01.388188  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388192  782266 command_runner.go:130] >       "size": "63273227",
	I0920 19:27:01.388196  782266 command_runner.go:130] >       "uid": null,
	I0920 19:27:01.388200  782266 command_runner.go:130] >       "username": "nonroot",
	I0920 19:27:01.388209  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.388215  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.388219  782266 command_runner.go:130] >     },
	I0920 19:27:01.388222  782266 command_runner.go:130] >     {
	I0920 19:27:01.388228  782266 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0920 19:27:01.388234  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.388239  782266 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0920 19:27:01.388244  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388248  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.388255  782266 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0920 19:27:01.388263  782266 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0920 19:27:01.388269  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388273  782266 command_runner.go:130] >       "size": "149009664",
	I0920 19:27:01.388277  782266 command_runner.go:130] >       "uid": {
	I0920 19:27:01.388283  782266 command_runner.go:130] >         "value": "0"
	I0920 19:27:01.388286  782266 command_runner.go:130] >       },
	I0920 19:27:01.388291  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.388296  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.388300  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.388305  782266 command_runner.go:130] >     },
	I0920 19:27:01.388308  782266 command_runner.go:130] >     {
	I0920 19:27:01.388314  782266 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0920 19:27:01.388320  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.388325  782266 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0920 19:27:01.388331  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388335  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.388343  782266 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0920 19:27:01.388353  782266 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0920 19:27:01.388358  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388362  782266 command_runner.go:130] >       "size": "95237600",
	I0920 19:27:01.388367  782266 command_runner.go:130] >       "uid": {
	I0920 19:27:01.388371  782266 command_runner.go:130] >         "value": "0"
	I0920 19:27:01.388374  782266 command_runner.go:130] >       },
	I0920 19:27:01.388378  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.388382  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.388386  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.388389  782266 command_runner.go:130] >     },
	I0920 19:27:01.388393  782266 command_runner.go:130] >     {
	I0920 19:27:01.388401  782266 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0920 19:27:01.388405  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.388412  782266 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0920 19:27:01.388416  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388420  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.388429  782266 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0920 19:27:01.388440  782266 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0920 19:27:01.388445  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388450  782266 command_runner.go:130] >       "size": "89437508",
	I0920 19:27:01.388454  782266 command_runner.go:130] >       "uid": {
	I0920 19:27:01.388460  782266 command_runner.go:130] >         "value": "0"
	I0920 19:27:01.388463  782266 command_runner.go:130] >       },
	I0920 19:27:01.388467  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.388471  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.388474  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.388478  782266 command_runner.go:130] >     },
	I0920 19:27:01.388481  782266 command_runner.go:130] >     {
	I0920 19:27:01.388488  782266 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0920 19:27:01.388492  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.388497  782266 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0920 19:27:01.388500  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388504  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.388520  782266 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0920 19:27:01.388529  782266 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0920 19:27:01.388533  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388537  782266 command_runner.go:130] >       "size": "92733849",
	I0920 19:27:01.388541  782266 command_runner.go:130] >       "uid": null,
	I0920 19:27:01.388545  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.388549  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.388553  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.388557  782266 command_runner.go:130] >     },
	I0920 19:27:01.388560  782266 command_runner.go:130] >     {
	I0920 19:27:01.388566  782266 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0920 19:27:01.388573  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.388577  782266 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0920 19:27:01.388582  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388585  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.388594  782266 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0920 19:27:01.388602  782266 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0920 19:27:01.388608  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388613  782266 command_runner.go:130] >       "size": "68420934",
	I0920 19:27:01.388616  782266 command_runner.go:130] >       "uid": {
	I0920 19:27:01.388620  782266 command_runner.go:130] >         "value": "0"
	I0920 19:27:01.388623  782266 command_runner.go:130] >       },
	I0920 19:27:01.388627  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.388631  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.388634  782266 command_runner.go:130] >       "pinned": false
	I0920 19:27:01.388637  782266 command_runner.go:130] >     },
	I0920 19:27:01.388641  782266 command_runner.go:130] >     {
	I0920 19:27:01.388646  782266 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0920 19:27:01.388652  782266 command_runner.go:130] >       "repoTags": [
	I0920 19:27:01.388656  782266 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0920 19:27:01.388660  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388664  782266 command_runner.go:130] >       "repoDigests": [
	I0920 19:27:01.388672  782266 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0920 19:27:01.388682  782266 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0920 19:27:01.388688  782266 command_runner.go:130] >       ],
	I0920 19:27:01.388692  782266 command_runner.go:130] >       "size": "742080",
	I0920 19:27:01.388695  782266 command_runner.go:130] >       "uid": {
	I0920 19:27:01.388699  782266 command_runner.go:130] >         "value": "65535"
	I0920 19:27:01.388703  782266 command_runner.go:130] >       },
	I0920 19:27:01.388706  782266 command_runner.go:130] >       "username": "",
	I0920 19:27:01.388710  782266 command_runner.go:130] >       "spec": null,
	I0920 19:27:01.388714  782266 command_runner.go:130] >       "pinned": true
	I0920 19:27:01.388717  782266 command_runner.go:130] >     }
	I0920 19:27:01.388720  782266 command_runner.go:130] >   ]
	I0920 19:27:01.388723  782266 command_runner.go:130] > }
	I0920 19:27:01.388836  782266 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 19:27:01.388847  782266 cache_images.go:84] Images are preloaded, skipping loading
	I0920 19:27:01.388873  782266 kubeadm.go:934] updating node { 192.168.39.168 8443 v1.31.1 crio true true} ...
	I0920 19:27:01.388985  782266 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-756894 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.168
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-756894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
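
Note: the fragment above is the kubelet systemd override that minikube generates for this node; --hostname-override and --node-ip carry the node name and IP from the cluster config printed on the same line. The sketch below is a hypothetical illustration of rendering such a fragment with text/template, using only values visible in the log; it is not minikube's actual template.

	package main

	import (
		"os"
		"text/template"
	)

	// kubeletUnit is a simplified stand-in for the logged fragment; only the
	// fields that vary per node are templated here.
	const kubeletUnit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(kubeletUnit))
		_ = t.Execute(os.Stdout, struct {
			KubernetesVersion, NodeName, NodeIP string
		}{"v1.31.1", "multinode-756894", "192.168.39.168"})
	}
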
	I0920 19:27:01.389050  782266 ssh_runner.go:195] Run: crio config
	I0920 19:27:01.424121  782266 command_runner.go:130] ! time="2024-09-20 19:27:01.401150239Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0920 19:27:01.429290  782266 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0920 19:27:01.440607  782266 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0920 19:27:01.440630  782266 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0920 19:27:01.440636  782266 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0920 19:27:01.440641  782266 command_runner.go:130] > #
	I0920 19:27:01.440655  782266 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0920 19:27:01.440665  782266 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0920 19:27:01.440674  782266 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0920 19:27:01.440685  782266 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0920 19:27:01.440694  782266 command_runner.go:130] > # reload'.
	I0920 19:27:01.440701  782266 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0920 19:27:01.440710  782266 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0920 19:27:01.440717  782266 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0920 19:27:01.440724  782266 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0920 19:27:01.440733  782266 command_runner.go:130] > [crio]
	I0920 19:27:01.440745  782266 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0920 19:27:01.440754  782266 command_runner.go:130] > # containers images, in this directory.
	I0920 19:27:01.440764  782266 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0920 19:27:01.440780  782266 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0920 19:27:01.440787  782266 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0920 19:27:01.440796  782266 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0920 19:27:01.440803  782266 command_runner.go:130] > # imagestore = ""
	I0920 19:27:01.440811  782266 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0920 19:27:01.440819  782266 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0920 19:27:01.440829  782266 command_runner.go:130] > storage_driver = "overlay"
	I0920 19:27:01.440847  782266 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0920 19:27:01.440859  782266 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0920 19:27:01.440867  782266 command_runner.go:130] > storage_option = [
	I0920 19:27:01.440874  782266 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0920 19:27:01.440879  782266 command_runner.go:130] > ]
	I0920 19:27:01.440885  782266 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0920 19:27:01.440893  782266 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0920 19:27:01.440897  782266 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0920 19:27:01.440905  782266 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0920 19:27:01.440915  782266 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0920 19:27:01.440925  782266 command_runner.go:130] > # always happen on a node reboot
	I0920 19:27:01.440936  782266 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0920 19:27:01.440951  782266 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0920 19:27:01.440964  782266 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0920 19:27:01.440974  782266 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0920 19:27:01.440982  782266 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0920 19:27:01.440989  782266 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0920 19:27:01.441001  782266 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0920 19:27:01.441011  782266 command_runner.go:130] > # internal_wipe = true
	I0920 19:27:01.441026  782266 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0920 19:27:01.441037  782266 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0920 19:27:01.441046  782266 command_runner.go:130] > # internal_repair = false
	I0920 19:27:01.441058  782266 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0920 19:27:01.441069  782266 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0920 19:27:01.441077  782266 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0920 19:27:01.441084  782266 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0920 19:27:01.441099  782266 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0920 19:27:01.441109  782266 command_runner.go:130] > [crio.api]
	I0920 19:27:01.441121  782266 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0920 19:27:01.441130  782266 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0920 19:27:01.441141  782266 command_runner.go:130] > # IP address on which the stream server will listen.
	I0920 19:27:01.441150  782266 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0920 19:27:01.441160  782266 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0920 19:27:01.441168  782266 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0920 19:27:01.441177  782266 command_runner.go:130] > # stream_port = "0"
	I0920 19:27:01.441186  782266 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0920 19:27:01.441196  782266 command_runner.go:130] > # stream_enable_tls = false
	I0920 19:27:01.441208  782266 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0920 19:27:01.441217  782266 command_runner.go:130] > # stream_idle_timeout = ""
	I0920 19:27:01.441229  782266 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0920 19:27:01.441241  782266 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0920 19:27:01.441247  782266 command_runner.go:130] > # minutes.
	I0920 19:27:01.441251  782266 command_runner.go:130] > # stream_tls_cert = ""
	I0920 19:27:01.441262  782266 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0920 19:27:01.441273  782266 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0920 19:27:01.441280  782266 command_runner.go:130] > # stream_tls_key = ""
	I0920 19:27:01.441293  782266 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0920 19:27:01.441305  782266 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0920 19:27:01.441326  782266 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0920 19:27:01.441333  782266 command_runner.go:130] > # stream_tls_ca = ""
	I0920 19:27:01.441343  782266 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0920 19:27:01.441353  782266 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0920 19:27:01.441367  782266 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0920 19:27:01.441377  782266 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
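
Note: grpc_max_send_msg_size and grpc_max_recv_msg_size above cap CRI-O's gRPC messages at 16 MiB. A CRI client talking to the crio socket would typically set matching limits on its side; the sketch below is an illustration using grpc-go, where the socket path is the default listen path shown later in this config and the limits are copied from the two values above (an assumption, not a recommendation).

	package main

	import (
		"log"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
	)

	func main() {
		const maxMsg = 16 * 1024 * 1024 // matches the 16777216 shown above
		conn, err := grpc.Dial(
			"unix:///var/run/crio/crio.sock", // CRI-O's default listen socket
			grpc.WithTransportCredentials(insecure.NewCredentials()),
			grpc.WithDefaultCallOptions(
				grpc.MaxCallRecvMsgSize(maxMsg),
				grpc.MaxCallSendMsgSize(maxMsg),
			),
		)
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
		_ = conn // a real client would now create a CRI RuntimeService client on conn
	}
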
	I0920 19:27:01.441387  782266 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0920 19:27:01.441398  782266 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0920 19:27:01.441407  782266 command_runner.go:130] > [crio.runtime]
	I0920 19:27:01.441417  782266 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0920 19:27:01.441426  782266 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0920 19:27:01.441435  782266 command_runner.go:130] > # "nofile=1024:2048"
	I0920 19:27:01.441447  782266 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0920 19:27:01.441458  782266 command_runner.go:130] > # default_ulimits = [
	I0920 19:27:01.441467  782266 command_runner.go:130] > # ]
	I0920 19:27:01.441479  782266 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0920 19:27:01.441487  782266 command_runner.go:130] > # no_pivot = false
	I0920 19:27:01.441499  782266 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0920 19:27:01.441508  782266 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0920 19:27:01.441519  782266 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0920 19:27:01.441531  782266 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0920 19:27:01.441542  782266 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0920 19:27:01.441556  782266 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0920 19:27:01.441566  782266 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0920 19:27:01.441576  782266 command_runner.go:130] > # Cgroup setting for conmon
	I0920 19:27:01.441587  782266 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0920 19:27:01.441596  782266 command_runner.go:130] > conmon_cgroup = "pod"
	I0920 19:27:01.441609  782266 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0920 19:27:01.441620  782266 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0920 19:27:01.441633  782266 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0920 19:27:01.441641  782266 command_runner.go:130] > conmon_env = [
	I0920 19:27:01.441653  782266 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0920 19:27:01.441660  782266 command_runner.go:130] > ]
	I0920 19:27:01.441668  782266 command_runner.go:130] > # Additional environment variables to set for all the
	I0920 19:27:01.441675  782266 command_runner.go:130] > # containers. These are overridden if set in the
	I0920 19:27:01.441682  782266 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0920 19:27:01.441690  782266 command_runner.go:130] > # default_env = [
	I0920 19:27:01.441700  782266 command_runner.go:130] > # ]
	I0920 19:27:01.441711  782266 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0920 19:27:01.441725  782266 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0920 19:27:01.441734  782266 command_runner.go:130] > # selinux = false
	I0920 19:27:01.441746  782266 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0920 19:27:01.441756  782266 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0920 19:27:01.441765  782266 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0920 19:27:01.441775  782266 command_runner.go:130] > # seccomp_profile = ""
	I0920 19:27:01.441788  782266 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0920 19:27:01.441801  782266 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0920 19:27:01.441813  782266 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0920 19:27:01.441823  782266 command_runner.go:130] > # which might increase security.
	I0920 19:27:01.441833  782266 command_runner.go:130] > # This option is currently deprecated,
	I0920 19:27:01.441845  782266 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0920 19:27:01.441855  782266 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0920 19:27:01.441868  782266 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0920 19:27:01.441883  782266 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0920 19:27:01.441899  782266 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0920 19:27:01.441912  782266 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0920 19:27:01.441922  782266 command_runner.go:130] > # This option supports live configuration reload.
	I0920 19:27:01.441930  782266 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0920 19:27:01.441938  782266 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0920 19:27:01.441949  782266 command_runner.go:130] > # the cgroup blockio controller.
	I0920 19:27:01.441959  782266 command_runner.go:130] > # blockio_config_file = ""
	I0920 19:27:01.441969  782266 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0920 19:27:01.441978  782266 command_runner.go:130] > # blockio parameters.
	I0920 19:27:01.441987  782266 command_runner.go:130] > # blockio_reload = false
	I0920 19:27:01.441999  782266 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0920 19:27:01.442008  782266 command_runner.go:130] > # irqbalance daemon.
	I0920 19:27:01.442015  782266 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0920 19:27:01.442024  782266 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0920 19:27:01.442037  782266 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0920 19:27:01.442050  782266 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0920 19:27:01.442062  782266 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0920 19:27:01.442074  782266 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0920 19:27:01.442085  782266 command_runner.go:130] > # This option supports live configuration reload.
	I0920 19:27:01.442094  782266 command_runner.go:130] > # rdt_config_file = ""
	I0920 19:27:01.442102  782266 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0920 19:27:01.442107  782266 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0920 19:27:01.442168  782266 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0920 19:27:01.442183  782266 command_runner.go:130] > # separate_pull_cgroup = ""
	I0920 19:27:01.442189  782266 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0920 19:27:01.442205  782266 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0920 19:27:01.442215  782266 command_runner.go:130] > # will be added.
	I0920 19:27:01.442224  782266 command_runner.go:130] > # default_capabilities = [
	I0920 19:27:01.442233  782266 command_runner.go:130] > # 	"CHOWN",
	I0920 19:27:01.442242  782266 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0920 19:27:01.442250  782266 command_runner.go:130] > # 	"FSETID",
	I0920 19:27:01.442256  782266 command_runner.go:130] > # 	"FOWNER",
	I0920 19:27:01.442264  782266 command_runner.go:130] > # 	"SETGID",
	I0920 19:27:01.442269  782266 command_runner.go:130] > # 	"SETUID",
	I0920 19:27:01.442274  782266 command_runner.go:130] > # 	"SETPCAP",
	I0920 19:27:01.442279  782266 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0920 19:27:01.442286  782266 command_runner.go:130] > # 	"KILL",
	I0920 19:27:01.442294  782266 command_runner.go:130] > # ]
	I0920 19:27:01.442309  782266 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0920 19:27:01.442324  782266 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0920 19:27:01.442344  782266 command_runner.go:130] > # add_inheritable_capabilities = false
	I0920 19:27:01.442355  782266 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0920 19:27:01.442364  782266 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0920 19:27:01.442373  782266 command_runner.go:130] > default_sysctls = [
	I0920 19:27:01.442382  782266 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0920 19:27:01.442390  782266 command_runner.go:130] > ]
	I0920 19:27:01.442401  782266 command_runner.go:130] > # List of devices on the host that a
	I0920 19:27:01.442413  782266 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0920 19:27:01.442422  782266 command_runner.go:130] > # allowed_devices = [
	I0920 19:27:01.442431  782266 command_runner.go:130] > # 	"/dev/fuse",
	I0920 19:27:01.442437  782266 command_runner.go:130] > # ]
	I0920 19:27:01.442442  782266 command_runner.go:130] > # List of additional devices. specified as
	I0920 19:27:01.442454  782266 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0920 19:27:01.442466  782266 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0920 19:27:01.442477  782266 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0920 19:27:01.442487  782266 command_runner.go:130] > # additional_devices = [
	I0920 19:27:01.442495  782266 command_runner.go:130] > # ]
	I0920 19:27:01.442505  782266 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0920 19:27:01.442520  782266 command_runner.go:130] > # cdi_spec_dirs = [
	I0920 19:27:01.442527  782266 command_runner.go:130] > # 	"/etc/cdi",
	I0920 19:27:01.442531  782266 command_runner.go:130] > # 	"/var/run/cdi",
	I0920 19:27:01.442538  782266 command_runner.go:130] > # ]
	I0920 19:27:01.442548  782266 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0920 19:27:01.442561  782266 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0920 19:27:01.442570  782266 command_runner.go:130] > # Defaults to false.
	I0920 19:27:01.442581  782266 command_runner.go:130] > # device_ownership_from_security_context = false
	I0920 19:27:01.442593  782266 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0920 19:27:01.442605  782266 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0920 19:27:01.442611  782266 command_runner.go:130] > # hooks_dir = [
	I0920 19:27:01.442616  782266 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0920 19:27:01.442620  782266 command_runner.go:130] > # ]
	I0920 19:27:01.442633  782266 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0920 19:27:01.442648  782266 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0920 19:27:01.442659  782266 command_runner.go:130] > # its default mounts from the following two files:
	I0920 19:27:01.442667  782266 command_runner.go:130] > #
	I0920 19:27:01.442678  782266 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0920 19:27:01.442692  782266 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0920 19:27:01.442700  782266 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0920 19:27:01.442703  782266 command_runner.go:130] > #
	I0920 19:27:01.442713  782266 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0920 19:27:01.442726  782266 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0920 19:27:01.442739  782266 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0920 19:27:01.442753  782266 command_runner.go:130] > #      only add mounts it finds in this file.
	I0920 19:27:01.442761  782266 command_runner.go:130] > #
	I0920 19:27:01.442768  782266 command_runner.go:130] > # default_mounts_file = ""
	I0920 19:27:01.442778  782266 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0920 19:27:01.442786  782266 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0920 19:27:01.442792  782266 command_runner.go:130] > pids_limit = 1024
	I0920 19:27:01.442805  782266 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0920 19:27:01.442817  782266 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0920 19:27:01.442830  782266 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0920 19:27:01.442870  782266 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0920 19:27:01.442880  782266 command_runner.go:130] > # log_size_max = -1
	I0920 19:27:01.442891  782266 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0920 19:27:01.442900  782266 command_runner.go:130] > # log_to_journald = false
	I0920 19:27:01.442912  782266 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0920 19:27:01.442919  782266 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0920 19:27:01.442925  782266 command_runner.go:130] > # Path to directory for container attach sockets.
	I0920 19:27:01.442934  782266 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0920 19:27:01.442945  782266 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0920 19:27:01.442955  782266 command_runner.go:130] > # bind_mount_prefix = ""
	I0920 19:27:01.442966  782266 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0920 19:27:01.442980  782266 command_runner.go:130] > # read_only = false
	I0920 19:27:01.442992  782266 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0920 19:27:01.443003  782266 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0920 19:27:01.443009  782266 command_runner.go:130] > # live configuration reload.
	I0920 19:27:01.443015  782266 command_runner.go:130] > # log_level = "info"
	I0920 19:27:01.443027  782266 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0920 19:27:01.443039  782266 command_runner.go:130] > # This option supports live configuration reload.
	I0920 19:27:01.443048  782266 command_runner.go:130] > # log_filter = ""
	I0920 19:27:01.443057  782266 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0920 19:27:01.443071  782266 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0920 19:27:01.443080  782266 command_runner.go:130] > # separated by comma.
	I0920 19:27:01.443091  782266 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 19:27:01.443097  782266 command_runner.go:130] > # uid_mappings = ""
	I0920 19:27:01.443106  782266 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0920 19:27:01.443119  782266 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0920 19:27:01.443128  782266 command_runner.go:130] > # separated by comma.
	I0920 19:27:01.443144  782266 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 19:27:01.443156  782266 command_runner.go:130] > # gid_mappings = ""
	I0920 19:27:01.443169  782266 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0920 19:27:01.443178  782266 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0920 19:27:01.443189  782266 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0920 19:27:01.443204  782266 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 19:27:01.443229  782266 command_runner.go:130] > # minimum_mappable_uid = -1
	I0920 19:27:01.443242  782266 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0920 19:27:01.443254  782266 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0920 19:27:01.443262  782266 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0920 19:27:01.443274  782266 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 19:27:01.443285  782266 command_runner.go:130] > # minimum_mappable_gid = -1
	I0920 19:27:01.443297  782266 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0920 19:27:01.443309  782266 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0920 19:27:01.443320  782266 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0920 19:27:01.443330  782266 command_runner.go:130] > # ctr_stop_timeout = 30
	I0920 19:27:01.443341  782266 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0920 19:27:01.443349  782266 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0920 19:27:01.443359  782266 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0920 19:27:01.443370  782266 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0920 19:27:01.443379  782266 command_runner.go:130] > drop_infra_ctr = false
	I0920 19:27:01.443390  782266 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0920 19:27:01.443401  782266 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0920 19:27:01.443414  782266 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0920 19:27:01.443423  782266 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0920 19:27:01.443433  782266 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0920 19:27:01.443444  782266 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0920 19:27:01.443455  782266 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0920 19:27:01.443466  782266 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0920 19:27:01.443475  782266 command_runner.go:130] > # shared_cpuset = ""
	I0920 19:27:01.443485  782266 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0920 19:27:01.443496  782266 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0920 19:27:01.443505  782266 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0920 19:27:01.443516  782266 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0920 19:27:01.443522  782266 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0920 19:27:01.443530  782266 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0920 19:27:01.443546  782266 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0920 19:27:01.443555  782266 command_runner.go:130] > # enable_criu_support = false
	I0920 19:27:01.443566  782266 command_runner.go:130] > # Enable/disable the generation of the container,
	I0920 19:27:01.443579  782266 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0920 19:27:01.443589  782266 command_runner.go:130] > # enable_pod_events = false
	I0920 19:27:01.443600  782266 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0920 19:27:01.443608  782266 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0920 19:27:01.443616  782266 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0920 19:27:01.443626  782266 command_runner.go:130] > # default_runtime = "runc"
	I0920 19:27:01.443637  782266 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0920 19:27:01.443652  782266 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0920 19:27:01.443669  782266 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0920 19:27:01.443680  782266 command_runner.go:130] > # creation as a file is not desired either.
	I0920 19:27:01.443691  782266 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0920 19:27:01.443700  782266 command_runner.go:130] > # the hostname is being managed dynamically.
	I0920 19:27:01.443711  782266 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0920 19:27:01.443720  782266 command_runner.go:130] > # ]
	I0920 19:27:01.443730  782266 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0920 19:27:01.443742  782266 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0920 19:27:01.443755  782266 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0920 19:27:01.443765  782266 command_runner.go:130] > # Each entry in the table should follow the format:
	I0920 19:27:01.443771  782266 command_runner.go:130] > #
	I0920 19:27:01.443776  782266 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0920 19:27:01.443785  782266 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0920 19:27:01.443816  782266 command_runner.go:130] > # runtime_type = "oci"
	I0920 19:27:01.443826  782266 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0920 19:27:01.443838  782266 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0920 19:27:01.443848  782266 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0920 19:27:01.443856  782266 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0920 19:27:01.443861  782266 command_runner.go:130] > # monitor_env = []
	I0920 19:27:01.443868  782266 command_runner.go:130] > # privileged_without_host_devices = false
	I0920 19:27:01.443877  782266 command_runner.go:130] > # allowed_annotations = []
	I0920 19:27:01.443889  782266 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0920 19:27:01.443897  782266 command_runner.go:130] > # Where:
	I0920 19:27:01.443906  782266 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0920 19:27:01.443918  782266 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0920 19:27:01.443932  782266 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0920 19:27:01.443943  782266 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0920 19:27:01.443953  782266 command_runner.go:130] > #   in $PATH.
	I0920 19:27:01.443968  782266 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0920 19:27:01.443979  782266 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0920 19:27:01.443992  782266 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0920 19:27:01.444001  782266 command_runner.go:130] > #   state.
	I0920 19:27:01.444011  782266 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0920 19:27:01.444023  782266 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0920 19:27:01.444031  782266 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0920 19:27:01.444042  782266 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0920 19:27:01.444055  782266 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0920 19:27:01.444069  782266 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0920 19:27:01.444078  782266 command_runner.go:130] > #   The currently recognized values are:
	I0920 19:27:01.444091  782266 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0920 19:27:01.444105  782266 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0920 19:27:01.444114  782266 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0920 19:27:01.444121  782266 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0920 19:27:01.444134  782266 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0920 19:27:01.444147  782266 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0920 19:27:01.444159  782266 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0920 19:27:01.444172  782266 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0920 19:27:01.444184  782266 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0920 19:27:01.444195  782266 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0920 19:27:01.444202  782266 command_runner.go:130] > #   deprecated option "conmon".
	I0920 19:27:01.444212  782266 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0920 19:27:01.444223  782266 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0920 19:27:01.444239  782266 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0920 19:27:01.444250  782266 command_runner.go:130] > #   should be moved to the container's cgroup
	I0920 19:27:01.444263  782266 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0920 19:27:01.444273  782266 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0920 19:27:01.444283  782266 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0920 19:27:01.444292  782266 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0920 19:27:01.444301  782266 command_runner.go:130] > #
	I0920 19:27:01.444312  782266 command_runner.go:130] > # Using the seccomp notifier feature:
	I0920 19:27:01.444322  782266 command_runner.go:130] > #
	I0920 19:27:01.444334  782266 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0920 19:27:01.444346  782266 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0920 19:27:01.444354  782266 command_runner.go:130] > #
	I0920 19:27:01.444363  782266 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0920 19:27:01.444372  782266 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0920 19:27:01.444377  782266 command_runner.go:130] > #
	I0920 19:27:01.444388  782266 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0920 19:27:01.444397  782266 command_runner.go:130] > # feature.
	I0920 19:27:01.444404  782266 command_runner.go:130] > #
	I0920 19:27:01.444414  782266 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0920 19:27:01.444426  782266 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0920 19:27:01.444438  782266 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0920 19:27:01.444450  782266 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0920 19:27:01.444458  782266 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0920 19:27:01.444465  782266 command_runner.go:130] > #
	I0920 19:27:01.444478  782266 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0920 19:27:01.444490  782266 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0920 19:27:01.444497  782266 command_runner.go:130] > #
	I0920 19:27:01.444507  782266 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0920 19:27:01.444519  782266 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0920 19:27:01.444526  782266 command_runner.go:130] > #
	I0920 19:27:01.444536  782266 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0920 19:27:01.444544  782266 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0920 19:27:01.444552  782266 command_runner.go:130] > # limitation.
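For illustration only (not part of the captured config): a minimal sketch of a pod that opts into the seccomp notifier described above, assuming the runtime handler already lists "io.kubernetes.cri-o.seccompNotifierAction" in its allowed_annotations; the pod name, image and sleep command are arbitrary assumptions.

	cat <<'EOF' | kubectl apply -f -
	apiVersion: v1
	kind: Pod
	metadata:
	  name: seccomp-notifier-demo                        # hypothetical name
	  annotations:
	    io.kubernetes.cri-o.seccompNotifierAction: "stop"
	spec:
	  restartPolicy: Never                               # required, as noted above
	  containers:
	  - name: demo
	    image: gcr.io/k8s-minikube/busybox
	    command: ["sleep", "3600"]
	    securityContext:
	      seccompProfile:
	        type: RuntimeDefault                         # gives CRI-O a profile to modify
	EOF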
	I0920 19:27:01.444563  782266 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0920 19:27:01.444574  782266 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0920 19:27:01.444582  782266 command_runner.go:130] > runtime_type = "oci"
	I0920 19:27:01.444592  782266 command_runner.go:130] > runtime_root = "/run/runc"
	I0920 19:27:01.444601  782266 command_runner.go:130] > runtime_config_path = ""
	I0920 19:27:01.444609  782266 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0920 19:27:01.444622  782266 command_runner.go:130] > monitor_cgroup = "pod"
	I0920 19:27:01.444628  782266 command_runner.go:130] > monitor_exec_cgroup = ""
	I0920 19:27:01.444634  782266 command_runner.go:130] > monitor_env = [
	I0920 19:27:01.444646  782266 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0920 19:27:01.444654  782266 command_runner.go:130] > ]
	I0920 19:27:01.444664  782266 command_runner.go:130] > privileged_without_host_devices = false
	I0920 19:27:01.444676  782266 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0920 19:27:01.444687  782266 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0920 19:27:01.444698  782266 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0920 19:27:01.444709  782266 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0920 19:27:01.444726  782266 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0920 19:27:01.444739  782266 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0920 19:27:01.444755  782266 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0920 19:27:01.444771  782266 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0920 19:27:01.444782  782266 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0920 19:27:01.444794  782266 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0920 19:27:01.444800  782266 command_runner.go:130] > # Example:
	I0920 19:27:01.444807  782266 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0920 19:27:01.444818  782266 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0920 19:27:01.444830  782266 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0920 19:27:01.444844  782266 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0920 19:27:01.444852  782266 command_runner.go:130] > # cpuset = 0
	I0920 19:27:01.444858  782266 command_runner.go:130] > # cpushares = "0-1"
	I0920 19:27:01.444866  782266 command_runner.go:130] > # Where:
	I0920 19:27:01.444875  782266 command_runner.go:130] > # The workload name is workload-type.
	I0920 19:27:01.444884  782266 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0920 19:27:01.444891  782266 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0920 19:27:01.444897  782266 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0920 19:27:01.444906  782266 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0920 19:27:01.444917  782266 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
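As a sketch only, following the literal annotation example on the preceding line (the pod and container names and the cpushares value are assumptions), a pod opting into that workload could look like:

	cat <<'EOF' | kubectl apply -f -
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    io.crio/workload: ""                                  # activation: key only, value ignored
	    io.crio.workload-type/demo: '{"cpushares": "512"}'    # per-container override
	spec:
	  containers:
	  - name: demo
	    image: gcr.io/k8s-minikube/busybox
	EOF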
	I0920 19:27:01.444929  782266 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0920 19:27:01.444942  782266 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0920 19:27:01.444952  782266 command_runner.go:130] > # Default value is set to true
	I0920 19:27:01.444962  782266 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0920 19:27:01.444973  782266 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0920 19:27:01.444983  782266 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0920 19:27:01.444989  782266 command_runner.go:130] > # Default value is set to 'false'
	I0920 19:27:01.444993  782266 command_runner.go:130] > # disable_hostport_mapping = false
	I0920 19:27:01.445000  782266 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0920 19:27:01.445002  782266 command_runner.go:130] > #
	I0920 19:27:01.445008  782266 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0920 19:27:01.445013  782266 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0920 19:27:01.445020  782266 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0920 19:27:01.445026  782266 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0920 19:27:01.445033  782266 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0920 19:27:01.445037  782266 command_runner.go:130] > [crio.image]
	I0920 19:27:01.445042  782266 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0920 19:27:01.445046  782266 command_runner.go:130] > # default_transport = "docker://"
	I0920 19:27:01.445051  782266 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0920 19:27:01.445060  782266 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0920 19:27:01.445068  782266 command_runner.go:130] > # global_auth_file = ""
	I0920 19:27:01.445077  782266 command_runner.go:130] > # The image used to instantiate infra containers.
	I0920 19:27:01.445085  782266 command_runner.go:130] > # This option supports live configuration reload.
	I0920 19:27:01.445092  782266 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0920 19:27:01.445102  782266 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0920 19:27:01.445110  782266 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0920 19:27:01.445118  782266 command_runner.go:130] > # This option supports live configuration reload.
	I0920 19:27:01.445123  782266 command_runner.go:130] > # pause_image_auth_file = ""
	I0920 19:27:01.445131  782266 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0920 19:27:01.445137  782266 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0920 19:27:01.445143  782266 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0920 19:27:01.445148  782266 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0920 19:27:01.445152  782266 command_runner.go:130] > # pause_command = "/pause"
	I0920 19:27:01.445158  782266 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0920 19:27:01.445163  782266 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0920 19:27:01.445168  782266 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0920 19:27:01.445177  782266 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0920 19:27:01.445184  782266 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0920 19:27:01.445190  782266 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0920 19:27:01.445193  782266 command_runner.go:130] > # pinned_images = [
	I0920 19:27:01.445196  782266 command_runner.go:130] > # ]
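A hypothetical drop-in (file name and image names are assumptions) illustrating the three pattern styles described above; CRI-O also reads configuration fragments from /etc/crio/crio.conf.d:

	sudo tee /etc/crio/crio.conf.d/10-pinned-images.conf >/dev/null <<'EOF'
	[crio.image]
	pinned_images = [
	    "registry.k8s.io/pause:3.10",        # exact match
	    "registry.k8s.io/kube-apiserver*",   # glob: wildcard only at the end
	    "*coredns*",                         # keyword: wildcards on both ends
	]
	EOF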
	I0920 19:27:01.445202  782266 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0920 19:27:01.445211  782266 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0920 19:27:01.445217  782266 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0920 19:27:01.445224  782266 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0920 19:27:01.445230  782266 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0920 19:27:01.445236  782266 command_runner.go:130] > # signature_policy = ""
	I0920 19:27:01.445242  782266 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0920 19:27:01.445250  782266 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0920 19:27:01.445256  782266 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0920 19:27:01.445267  782266 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0920 19:27:01.445274  782266 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0920 19:27:01.445279  782266 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0920 19:27:01.445291  782266 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0920 19:27:01.445303  782266 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0920 19:27:01.445310  782266 command_runner.go:130] > # changing them here.
	I0920 19:27:01.445314  782266 command_runner.go:130] > # insecure_registries = [
	I0920 19:27:01.445319  782266 command_runner.go:130] > # ]
	I0920 19:27:01.445325  782266 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0920 19:27:01.445333  782266 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0920 19:27:01.445337  782266 command_runner.go:130] > # image_volumes = "mkdir"
	I0920 19:27:01.445344  782266 command_runner.go:130] > # Temporary directory to use for storing big files
	I0920 19:27:01.445348  782266 command_runner.go:130] > # big_files_temporary_dir = ""
	I0920 19:27:01.445356  782266 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0920 19:27:01.445362  782266 command_runner.go:130] > # CNI plugins.
	I0920 19:27:01.445366  782266 command_runner.go:130] > [crio.network]
	I0920 19:27:01.445374  782266 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0920 19:27:01.445381  782266 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0920 19:27:01.445385  782266 command_runner.go:130] > # cni_default_network = ""
	I0920 19:27:01.445396  782266 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0920 19:27:01.445402  782266 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0920 19:27:01.445408  782266 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0920 19:27:01.445414  782266 command_runner.go:130] > # plugin_dirs = [
	I0920 19:27:01.445418  782266 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0920 19:27:01.445424  782266 command_runner.go:130] > # ]
	I0920 19:27:01.445430  782266 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0920 19:27:01.445435  782266 command_runner.go:130] > [crio.metrics]
	I0920 19:27:01.445450  782266 command_runner.go:130] > # Globally enable or disable metrics support.
	I0920 19:27:01.445456  782266 command_runner.go:130] > enable_metrics = true
	I0920 19:27:01.445461  782266 command_runner.go:130] > # Specify enabled metrics collectors.
	I0920 19:27:01.445470  782266 command_runner.go:130] > # Per default all metrics are enabled.
	I0920 19:27:01.445476  782266 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0920 19:27:01.445484  782266 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0920 19:27:01.445492  782266 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0920 19:27:01.445496  782266 command_runner.go:130] > # metrics_collectors = [
	I0920 19:27:01.445502  782266 command_runner.go:130] > # 	"operations",
	I0920 19:27:01.445507  782266 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0920 19:27:01.445513  782266 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0920 19:27:01.445517  782266 command_runner.go:130] > # 	"operations_errors",
	I0920 19:27:01.445523  782266 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0920 19:27:01.445527  782266 command_runner.go:130] > # 	"image_pulls_by_name",
	I0920 19:27:01.445533  782266 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0920 19:27:01.445541  782266 command_runner.go:130] > # 	"image_pulls_failures",
	I0920 19:27:01.445547  782266 command_runner.go:130] > # 	"image_pulls_successes",
	I0920 19:27:01.445552  782266 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0920 19:27:01.445557  782266 command_runner.go:130] > # 	"image_layer_reuse",
	I0920 19:27:01.445562  782266 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0920 19:27:01.445568  782266 command_runner.go:130] > # 	"containers_oom_total",
	I0920 19:27:01.445572  782266 command_runner.go:130] > # 	"containers_oom",
	I0920 19:27:01.445578  782266 command_runner.go:130] > # 	"processes_defunct",
	I0920 19:27:01.445582  782266 command_runner.go:130] > # 	"operations_total",
	I0920 19:27:01.445590  782266 command_runner.go:130] > # 	"operations_latency_seconds",
	I0920 19:27:01.445596  782266 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0920 19:27:01.445602  782266 command_runner.go:130] > # 	"operations_errors_total",
	I0920 19:27:01.445607  782266 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0920 19:27:01.445611  782266 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0920 19:27:01.445616  782266 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0920 19:27:01.445620  782266 command_runner.go:130] > # 	"image_pulls_success_total",
	I0920 19:27:01.445626  782266 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0920 19:27:01.445631  782266 command_runner.go:130] > # 	"containers_oom_count_total",
	I0920 19:27:01.445637  782266 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0920 19:27:01.445642  782266 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0920 19:27:01.445647  782266 command_runner.go:130] > # ]
	I0920 19:27:01.445652  782266 command_runner.go:130] > # The port on which the metrics server will listen.
	I0920 19:27:01.445658  782266 command_runner.go:130] > # metrics_port = 9090
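Since enable_metrics is set to true above, the Prometheus endpoint can be probed directly; a quick check (port taken from the default metrics_port shown above, metric names follow the prefixing rule described earlier):

	curl -s http://127.0.0.1:9090/metrics | grep -E '^crio_operations' | head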
	I0920 19:27:01.445663  782266 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0920 19:27:01.445671  782266 command_runner.go:130] > # metrics_socket = ""
	I0920 19:27:01.445681  782266 command_runner.go:130] > # The certificate for the secure metrics server.
	I0920 19:27:01.445693  782266 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0920 19:27:01.445702  782266 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0920 19:27:01.445709  782266 command_runner.go:130] > # certificate on any modification event.
	I0920 19:27:01.445713  782266 command_runner.go:130] > # metrics_cert = ""
	I0920 19:27:01.445720  782266 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0920 19:27:01.445725  782266 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0920 19:27:01.445731  782266 command_runner.go:130] > # metrics_key = ""
	I0920 19:27:01.445737  782266 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0920 19:27:01.445743  782266 command_runner.go:130] > [crio.tracing]
	I0920 19:27:01.445749  782266 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0920 19:27:01.445755  782266 command_runner.go:130] > # enable_tracing = false
	I0920 19:27:01.445760  782266 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0920 19:27:01.445767  782266 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0920 19:27:01.445773  782266 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0920 19:27:01.445780  782266 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0920 19:27:01.445784  782266 command_runner.go:130] > # CRI-O NRI configuration.
	I0920 19:27:01.445789  782266 command_runner.go:130] > [crio.nri]
	I0920 19:27:01.445795  782266 command_runner.go:130] > # Globally enable or disable NRI.
	I0920 19:27:01.445801  782266 command_runner.go:130] > # enable_nri = false
	I0920 19:27:01.445808  782266 command_runner.go:130] > # NRI socket to listen on.
	I0920 19:27:01.445814  782266 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0920 19:27:01.445819  782266 command_runner.go:130] > # NRI plugin directory to use.
	I0920 19:27:01.445825  782266 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0920 19:27:01.445830  782266 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0920 19:27:01.445840  782266 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0920 19:27:01.445847  782266 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0920 19:27:01.445851  782266 command_runner.go:130] > # nri_disable_connections = false
	I0920 19:27:01.445858  782266 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0920 19:27:01.445862  782266 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0920 19:27:01.445867  782266 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0920 19:27:01.445874  782266 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0920 19:27:01.445881  782266 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0920 19:27:01.445887  782266 command_runner.go:130] > [crio.stats]
	I0920 19:27:01.445893  782266 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0920 19:27:01.445900  782266 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0920 19:27:01.445905  782266 command_runner.go:130] > # stats_collection_period = 0
	I0920 19:27:01.445987  782266 cni.go:84] Creating CNI manager for ""
	I0920 19:27:01.445999  782266 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0920 19:27:01.446009  782266 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:27:01.446031  782266 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.168 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-756894 NodeName:multinode-756894 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.168"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.168 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:27:01.446157  782266 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.168
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-756894"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.168
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.168"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 19:27:01.446225  782266 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:27:01.456797  782266 command_runner.go:130] > kubeadm
	I0920 19:27:01.456819  782266 command_runner.go:130] > kubectl
	I0920 19:27:01.456825  782266 command_runner.go:130] > kubelet
	I0920 19:27:01.456878  782266 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:27:01.456937  782266 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:27:01.466503  782266 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0920 19:27:01.483431  782266 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:27:01.499900  782266 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0920 19:27:01.516337  782266 ssh_runner.go:195] Run: grep 192.168.39.168	control-plane.minikube.internal$ /etc/hosts
	I0920 19:27:01.520227  782266 command_runner.go:130] > 192.168.39.168	control-plane.minikube.internal
	I0920 19:27:01.520302  782266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:27:01.659368  782266 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:27:01.673938  782266 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/multinode-756894 for IP: 192.168.39.168
	I0920 19:27:01.673973  782266 certs.go:194] generating shared ca certs ...
	I0920 19:27:01.674002  782266 certs.go:226] acquiring lock for ca certs: {Name:mkf559981e1ff96dd3b092845a7637f34a653668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:27:01.674214  782266 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key
	I0920 19:27:01.674264  782266 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key
	I0920 19:27:01.674276  782266 certs.go:256] generating profile certs ...
	I0920 19:27:01.674387  782266 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/multinode-756894/client.key
	I0920 19:27:01.674533  782266 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/multinode-756894/apiserver.key.f88761e5
	I0920 19:27:01.674576  782266 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/multinode-756894/proxy-client.key
	I0920 19:27:01.674588  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 19:27:01.674610  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 19:27:01.674623  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 19:27:01.674638  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 19:27:01.674650  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/multinode-756894/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 19:27:01.674664  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/multinode-756894/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 19:27:01.674674  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/multinode-756894/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 19:27:01.674687  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/multinode-756894/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 19:27:01.674741  782266 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem (1338 bytes)
	W0920 19:27:01.674771  782266 certs.go:480] ignoring /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497_empty.pem, impossibly tiny 0 bytes
	I0920 19:27:01.674781  782266 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:27:01.674803  782266 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:27:01.674825  782266 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:27:01.674864  782266 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem (1679 bytes)
	I0920 19:27:01.674911  782266 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 19:27:01.674939  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> /usr/share/ca-certificates/7484972.pem
	I0920 19:27:01.674952  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:27:01.674964  782266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem -> /usr/share/ca-certificates/748497.pem
	I0920 19:27:01.675602  782266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:27:01.700159  782266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:27:01.724109  782266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:27:01.747820  782266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:27:01.771488  782266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/multinode-756894/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 19:27:01.796401  782266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/multinode-756894/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 19:27:01.820944  782266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/multinode-756894/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:27:01.844404  782266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/multinode-756894/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 19:27:01.868179  782266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /usr/share/ca-certificates/7484972.pem (1708 bytes)
	I0920 19:27:01.891319  782266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:27:01.915784  782266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem --> /usr/share/ca-certificates/748497.pem (1338 bytes)
	I0920 19:27:01.941340  782266 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:27:01.960152  782266 ssh_runner.go:195] Run: openssl version
	I0920 19:27:01.966223  782266 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0920 19:27:01.966294  782266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7484972.pem && ln -fs /usr/share/ca-certificates/7484972.pem /etc/ssl/certs/7484972.pem"
	I0920 19:27:01.979574  782266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7484972.pem
	I0920 19:27:01.984071  782266 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 20 18:33 /usr/share/ca-certificates/7484972.pem
	I0920 19:27:01.984242  782266 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 18:33 /usr/share/ca-certificates/7484972.pem
	I0920 19:27:01.984291  782266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7484972.pem
	I0920 19:27:01.989937  782266 command_runner.go:130] > 3ec20f2e
	I0920 19:27:01.990020  782266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7484972.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:27:02.000632  782266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:27:02.036176  782266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:27:02.040768  782266 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 20 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:27:02.040811  782266 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:27:02.040864  782266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:27:02.046715  782266 command_runner.go:130] > b5213941
	I0920 19:27:02.046797  782266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:27:02.056796  782266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/748497.pem && ln -fs /usr/share/ca-certificates/748497.pem /etc/ssl/certs/748497.pem"
	I0920 19:27:02.068180  782266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/748497.pem
	I0920 19:27:02.072678  782266 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 20 18:33 /usr/share/ca-certificates/748497.pem
	I0920 19:27:02.072716  782266 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 18:33 /usr/share/ca-certificates/748497.pem
	I0920 19:27:02.072761  782266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/748497.pem
	I0920 19:27:02.078283  782266 command_runner.go:130] > 51391683
	I0920 19:27:02.078442  782266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/748497.pem /etc/ssl/certs/51391683.0"
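The three blocks above repeat the same OpenSSL hashed-symlink convention; reduced to its essentials with the paths from this run (b5213941 is the hash printed for minikubeCA.pem above):

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints the subject hash, e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # OpenSSL resolves CAs via <hash>.0 symlinks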
	I0920 19:27:02.090190  782266 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:27:02.095529  782266 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:27:02.095567  782266 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0920 19:27:02.095577  782266 command_runner.go:130] > Device: 253,1	Inode: 3148840     Links: 1
	I0920 19:27:02.095596  782266 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0920 19:27:02.095609  782266 command_runner.go:130] > Access: 2024-09-20 19:20:12.559886041 +0000
	I0920 19:27:02.095621  782266 command_runner.go:130] > Modify: 2024-09-20 19:20:12.559886041 +0000
	I0920 19:27:02.095633  782266 command_runner.go:130] > Change: 2024-09-20 19:20:12.559886041 +0000
	I0920 19:27:02.095644  782266 command_runner.go:130] >  Birth: 2024-09-20 19:20:12.559886041 +0000
	I0920 19:27:02.095708  782266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:27:02.101387  782266 command_runner.go:130] > Certificate will not expire
	I0920 19:27:02.101745  782266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:27:02.107407  782266 command_runner.go:130] > Certificate will not expire
	I0920 19:27:02.107708  782266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:27:02.113969  782266 command_runner.go:130] > Certificate will not expire
	I0920 19:27:02.114138  782266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:27:02.119743  782266 command_runner.go:130] > Certificate will not expire
	I0920 19:27:02.119828  782266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:27:02.125383  782266 command_runner.go:130] > Certificate will not expire
	I0920 19:27:02.125437  782266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 19:27:02.130814  782266 command_runner.go:130] > Certificate will not expire
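The per-certificate checks above amount to one sweep over the profile's certificates; a sketch over the same files (86400 seconds = 24 hours):

	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	         etcd/server etcd/healthcheck-client etcd/peer; do
	  openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
	    && echo "${c}: will not expire within 24h"
	done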
	I0920 19:27:02.130954  782266 kubeadm.go:392] StartCluster: {Name:multinode-756894 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-756894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.204 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.72 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:f
alse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:27:02.131080  782266 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:27:02.131136  782266 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:27:02.165194  782266 command_runner.go:130] > 72d4e37d4505a966f9c27a077765be9a06e11ac9bc0e052057beff94627d97aa
	I0920 19:27:02.165224  782266 command_runner.go:130] > 28022afd37d6d2f008deeded66b17c0a72113eb2ad5fa6907f1036e18f1d975f
	I0920 19:27:02.165231  782266 command_runner.go:130] > fccbad40f2b3455e5b2be6eb12686d14833c14db21d1480e73f0f2e178f535d6
	I0920 19:27:02.165237  782266 command_runner.go:130] > 12fa8b93a3911dbfe0fc55628c90d22e218afdfbe8f5e7195f783b7c7c8af414
	I0920 19:27:02.165242  782266 command_runner.go:130] > 4461a840382433ce4c8a6f37fc819725d31e2075f1670cb56237965555159f42
	I0920 19:27:02.165247  782266 command_runner.go:130] > fab8a49afdb38089ed0f1190eef6bdf74f69a5b5bca37ddf5156576bed7e64d8
	I0920 19:27:02.165252  782266 command_runner.go:130] > 9e8c8df52527f8e0973846dfbbb41192f0968ba9bf00c00c03d6b7da2cc76c21
	I0920 19:27:02.165258  782266 command_runner.go:130] > 23a0fc48b5b4d9d4c9ae9065d80b0b2d042a7d5e7919702776ea5eec96d6d70b
	I0920 19:27:02.166670  782266 cri.go:89] found id: "72d4e37d4505a966f9c27a077765be9a06e11ac9bc0e052057beff94627d97aa"
	I0920 19:27:02.166689  782266 cri.go:89] found id: "28022afd37d6d2f008deeded66b17c0a72113eb2ad5fa6907f1036e18f1d975f"
	I0920 19:27:02.166694  782266 cri.go:89] found id: "fccbad40f2b3455e5b2be6eb12686d14833c14db21d1480e73f0f2e178f535d6"
	I0920 19:27:02.166698  782266 cri.go:89] found id: "12fa8b93a3911dbfe0fc55628c90d22e218afdfbe8f5e7195f783b7c7c8af414"
	I0920 19:27:02.166703  782266 cri.go:89] found id: "4461a840382433ce4c8a6f37fc819725d31e2075f1670cb56237965555159f42"
	I0920 19:27:02.166715  782266 cri.go:89] found id: "fab8a49afdb38089ed0f1190eef6bdf74f69a5b5bca37ddf5156576bed7e64d8"
	I0920 19:27:02.166719  782266 cri.go:89] found id: "9e8c8df52527f8e0973846dfbbb41192f0968ba9bf00c00c03d6b7da2cc76c21"
	I0920 19:27:02.166723  782266 cri.go:89] found id: "23a0fc48b5b4d9d4c9ae9065d80b0b2d042a7d5e7919702776ea5eec96d6d70b"
	I0920 19:27:02.166727  782266 cri.go:89] found id: ""
	I0920 19:27:02.166783  782266 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 20 19:31:12 multinode-756894 crio[2698]: time="2024-09-20 19:31:12.568293791Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860672568271950,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7911e502-01e2-42f0-9041-baf6989a41d6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:31:12 multinode-756894 crio[2698]: time="2024-09-20 19:31:12.568945420Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7dbe5b67-8526-42dc-9047-83a8d7c3e5d2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:31:12 multinode-756894 crio[2698]: time="2024-09-20 19:31:12.569025449Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7dbe5b67-8526-42dc-9047-83a8d7c3e5d2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:31:12 multinode-756894 crio[2698]: time="2024-09-20 19:31:12.569364337Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1032aae897c6a6e789586471840775b35b66bd755e2ea7221f6c2a7e9d01023f,PodSandboxId:78021bafe3d20c913f62fb213864b4120ec395f0f5f378ad7cbecfa5d6cc413b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726860462507047601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kr8zb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 20cc00ef-3035-488b-8846-4f43d56fa236,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0475dc410c9bd2962c731c4869948eccbd23017afbe2bf565ebbd87fdeb2bf23,PodSandboxId:76d754085d774cda47005a0433633d32fe881e18fec2cb9bd995aef50fa7d786,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726860428957320778,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2r822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e0c1f9-bbfc-4362-89fa-daeaae236602,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e31c03c3121e36f8b680ffeb04be136d13ed0eb30cc4232945f16828840b7fcf,PodSandboxId:4e099d73b22682c0a5a5a1dde1626a039ae8d2d98ffc894094e53b18de416a1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726860428727515994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-k7xq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38699bbe-9579-45e3-abe2-828348e890d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b1326c5d456cb895967959e46bbd4ccd2743f8eebbf4ea2920d93996b086c5,PodSandboxId:38808a33eadabf60b3bf793daee35e8b017adba19989f83c8c108d030de0a94b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726860428654175913,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5tkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c6ded21-2ef2-4fe8-be29-6f2c3fcc7e35,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60955aaf5949a780a0a25c5166eda54b8cfa0b6459fc2943c7962b3d0b2b479c,PodSandboxId:9f438bf46a92e7ffab993bc69d2ce1ab0a773e378252bf0e2ff344aa0a2c1065,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726860428685619092,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a43c8dc-3fd3-4761-876b-2494967a2f7b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9bd3ed7c925e80838dbb28a891c683435a9d040feae22c3e18aa8e7273cc15,PodSandboxId:77bd787f41e6c6feee7f8596005b33a7314f2999fd11550c663b84f758a289d4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726860424851395385,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6682239b7fb722a605bb9fb206b85a2,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5e65f8c20a2386e254f6706107d0cc7f75f834b088e05d4f04adfc74984a60,PodSandboxId:5fd31bbab35fed3fb22d7232ee57a5e50103639d591865ca5cfe6e2c2cccef57,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726860424814308052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 822c8ceecdfe4119d999c5ee754b8ea8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c77a7e1a224a7e290b56fbe2966968e539e4d6246462134099cf5e363b83dffb,PodSandboxId:ac9232fb23f537a32cd6724d4f0aa7417b505d67a73444703a1614fd39c88ec8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726860424831160293,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ca4c90e5672c1e7faf18f515c7ce595,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6c2d47645b3c320308300d42f92d21313904e5f30c2e0843aaa7c588014c301,PodSandboxId:ed88405164d64c4b8218fab6d9abcde896bfb41cb5c836aed409402e9a0f4b6e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726860424763351235,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56fdd834c9abaafc449abcb898099f8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8dcd882059b0fe1be82143b183508ab50d5688d42f4fd8d917bda34237eba96,PodSandboxId:5a505c52e820cb170c44f822bb0f5662b81a1567befe1e0296e6e9c359b4a0cd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726860095699891078,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kr8zb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 20cc00ef-3035-488b-8846-4f43d56fa236,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d4e37d4505a966f9c27a077765be9a06e11ac9bc0e052057beff94627d97aa,PodSandboxId:983db580543713ff6b79c9d54e5d3d700566e8c06206f15b4fd8278675f97229,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726860038958952580,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-k7xq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38699bbe-9579-45e3-abe2-828348e890d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28022afd37d6d2f008deeded66b17c0a72113eb2ad5fa6907f1036e18f1d975f,PodSandboxId:484bc97168c87403e6d624f33c30eae4932c92e413d6fc360ca0cde0661bbbe1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726860038935052890,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 2a43c8dc-3fd3-4761-876b-2494967a2f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fccbad40f2b3455e5b2be6eb12686d14833c14db21d1480e73f0f2e178f535d6,PodSandboxId:212c62c39933f2f28d2f41ea89dd26b2c4b4c9f9ceaa334e43d1d4c8fdabac3c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726860027205042203,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2r822,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 94e0c1f9-bbfc-4362-89fa-daeaae236602,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12fa8b93a3911dbfe0fc55628c90d22e218afdfbe8f5e7195f783b7c7c8af414,PodSandboxId:2409beaba30e5726b8d9b5f1b0e2faac96a8251d15d45802ade6f092deab1ef7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726860026953549127,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5tkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c6ded21-2ef2-4fe8-be29
-6f2c3fcc7e35,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4461a840382433ce4c8a6f37fc819725d31e2075f1670cb56237965555159f42,PodSandboxId:5664e1f96c4c0c218278727eeed301014e9b3a0687a3db76fc8af725e4555183,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726860015825323175,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 822c8ceecdfe4119d999c5ee754b8ea8,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e8c8df52527f8e0973846dfbbb41192f0968ba9bf00c00c03d6b7da2cc76c21,PodSandboxId:04efda436d7240785cb8c0b48a1928dfbc3b03ffd216666b108f27cbfbadedb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726860015795461437,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6682239b7fb722a605bb9fb206b85a2,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fab8a49afdb38089ed0f1190eef6bdf74f69a5b5bca37ddf5156576bed7e64d8,PodSandboxId:96e2d4cf2c1fbe2fc2b1bd48e31218fd3dabf1869233ade215a1dea2d0c9b7c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726860015822873372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56fdd834c9abaafc449abcb898099f8d,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23a0fc48b5b4d9d4c9ae9065d80b0b2d042a7d5e7919702776ea5eec96d6d70b,PodSandboxId:d22dc1cbb45f98df2dd239baaa1834cf1d79630d7011a5acc083024c76958a99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726860015790876301,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ca4c90e5672c1e7faf18f515c7ce595,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7dbe5b67-8526-42dc-9047-83a8d7c3e5d2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:31:12 multinode-756894 crio[2698]: time="2024-09-20 19:31:12.608786032Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=31b3c778-849e-4d19-8a11-ae1675b76f8b name=/runtime.v1.RuntimeService/Version
	Sep 20 19:31:12 multinode-756894 crio[2698]: time="2024-09-20 19:31:12.608867659Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=31b3c778-849e-4d19-8a11-ae1675b76f8b name=/runtime.v1.RuntimeService/Version
	Sep 20 19:31:12 multinode-756894 crio[2698]: time="2024-09-20 19:31:12.609843440Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8211123d-5f91-44f1-88ff-c173bd110116 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:31:12 multinode-756894 crio[2698]: time="2024-09-20 19:31:12.610242654Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860672610219586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8211123d-5f91-44f1-88ff-c173bd110116 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:31:12 multinode-756894 crio[2698]: time="2024-09-20 19:31:12.610642625Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=930af304-9117-4b80-99de-75ffb3ea593f name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:31:12 multinode-756894 crio[2698]: time="2024-09-20 19:31:12.610743921Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=930af304-9117-4b80-99de-75ffb3ea593f name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:31:12 multinode-756894 crio[2698]: time="2024-09-20 19:31:12.611111849Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1032aae897c6a6e789586471840775b35b66bd755e2ea7221f6c2a7e9d01023f,PodSandboxId:78021bafe3d20c913f62fb213864b4120ec395f0f5f378ad7cbecfa5d6cc413b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726860462507047601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kr8zb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 20cc00ef-3035-488b-8846-4f43d56fa236,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0475dc410c9bd2962c731c4869948eccbd23017afbe2bf565ebbd87fdeb2bf23,PodSandboxId:76d754085d774cda47005a0433633d32fe881e18fec2cb9bd995aef50fa7d786,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726860428957320778,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2r822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e0c1f9-bbfc-4362-89fa-daeaae236602,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e31c03c3121e36f8b680ffeb04be136d13ed0eb30cc4232945f16828840b7fcf,PodSandboxId:4e099d73b22682c0a5a5a1dde1626a039ae8d2d98ffc894094e53b18de416a1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726860428727515994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-k7xq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38699bbe-9579-45e3-abe2-828348e890d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b1326c5d456cb895967959e46bbd4ccd2743f8eebbf4ea2920d93996b086c5,PodSandboxId:38808a33eadabf60b3bf793daee35e8b017adba19989f83c8c108d030de0a94b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726860428654175913,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5tkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c6ded21-2ef2-4fe8-be29-6f2c3fcc7e35,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60955aaf5949a780a0a25c5166eda54b8cfa0b6459fc2943c7962b3d0b2b479c,PodSandboxId:9f438bf46a92e7ffab993bc69d2ce1ab0a773e378252bf0e2ff344aa0a2c1065,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726860428685619092,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a43c8dc-3fd3-4761-876b-2494967a2f7b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9bd3ed7c925e80838dbb28a891c683435a9d040feae22c3e18aa8e7273cc15,PodSandboxId:77bd787f41e6c6feee7f8596005b33a7314f2999fd11550c663b84f758a289d4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726860424851395385,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6682239b7fb722a605bb9fb206b85a2,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5e65f8c20a2386e254f6706107d0cc7f75f834b088e05d4f04adfc74984a60,PodSandboxId:5fd31bbab35fed3fb22d7232ee57a5e50103639d591865ca5cfe6e2c2cccef57,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726860424814308052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 822c8ceecdfe4119d999c5ee754b8ea8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c77a7e1a224a7e290b56fbe2966968e539e4d6246462134099cf5e363b83dffb,PodSandboxId:ac9232fb23f537a32cd6724d4f0aa7417b505d67a73444703a1614fd39c88ec8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726860424831160293,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ca4c90e5672c1e7faf18f515c7ce595,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6c2d47645b3c320308300d42f92d21313904e5f30c2e0843aaa7c588014c301,PodSandboxId:ed88405164d64c4b8218fab6d9abcde896bfb41cb5c836aed409402e9a0f4b6e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726860424763351235,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56fdd834c9abaafc449abcb898099f8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8dcd882059b0fe1be82143b183508ab50d5688d42f4fd8d917bda34237eba96,PodSandboxId:5a505c52e820cb170c44f822bb0f5662b81a1567befe1e0296e6e9c359b4a0cd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726860095699891078,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kr8zb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 20cc00ef-3035-488b-8846-4f43d56fa236,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d4e37d4505a966f9c27a077765be9a06e11ac9bc0e052057beff94627d97aa,PodSandboxId:983db580543713ff6b79c9d54e5d3d700566e8c06206f15b4fd8278675f97229,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726860038958952580,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-k7xq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38699bbe-9579-45e3-abe2-828348e890d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28022afd37d6d2f008deeded66b17c0a72113eb2ad5fa6907f1036e18f1d975f,PodSandboxId:484bc97168c87403e6d624f33c30eae4932c92e413d6fc360ca0cde0661bbbe1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726860038935052890,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 2a43c8dc-3fd3-4761-876b-2494967a2f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fccbad40f2b3455e5b2be6eb12686d14833c14db21d1480e73f0f2e178f535d6,PodSandboxId:212c62c39933f2f28d2f41ea89dd26b2c4b4c9f9ceaa334e43d1d4c8fdabac3c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726860027205042203,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2r822,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 94e0c1f9-bbfc-4362-89fa-daeaae236602,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12fa8b93a3911dbfe0fc55628c90d22e218afdfbe8f5e7195f783b7c7c8af414,PodSandboxId:2409beaba30e5726b8d9b5f1b0e2faac96a8251d15d45802ade6f092deab1ef7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726860026953549127,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5tkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c6ded21-2ef2-4fe8-be29
-6f2c3fcc7e35,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4461a840382433ce4c8a6f37fc819725d31e2075f1670cb56237965555159f42,PodSandboxId:5664e1f96c4c0c218278727eeed301014e9b3a0687a3db76fc8af725e4555183,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726860015825323175,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 822c8ceecdfe4119d999c5ee754b8ea8,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e8c8df52527f8e0973846dfbbb41192f0968ba9bf00c00c03d6b7da2cc76c21,PodSandboxId:04efda436d7240785cb8c0b48a1928dfbc3b03ffd216666b108f27cbfbadedb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726860015795461437,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6682239b7fb722a605bb9fb206b85a2,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fab8a49afdb38089ed0f1190eef6bdf74f69a5b5bca37ddf5156576bed7e64d8,PodSandboxId:96e2d4cf2c1fbe2fc2b1bd48e31218fd3dabf1869233ade215a1dea2d0c9b7c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726860015822873372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56fdd834c9abaafc449abcb898099f8d,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23a0fc48b5b4d9d4c9ae9065d80b0b2d042a7d5e7919702776ea5eec96d6d70b,PodSandboxId:d22dc1cbb45f98df2dd239baaa1834cf1d79630d7011a5acc083024c76958a99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726860015790876301,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ca4c90e5672c1e7faf18f515c7ce595,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=930af304-9117-4b80-99de-75ffb3ea593f name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:31:12 multinode-756894 crio[2698]: time="2024-09-20 19:31:12.650913103Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d511ef65-4772-4528-a39b-72d0c60e197d name=/runtime.v1.RuntimeService/Version
	Sep 20 19:31:12 multinode-756894 crio[2698]: time="2024-09-20 19:31:12.651007899Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d511ef65-4772-4528-a39b-72d0c60e197d name=/runtime.v1.RuntimeService/Version
	Sep 20 19:31:12 multinode-756894 crio[2698]: time="2024-09-20 19:31:12.652428623Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=36731aca-d939-40a8-b951-0ee4f5297f63 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:31:12 multinode-756894 crio[2698]: time="2024-09-20 19:31:12.652949085Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860672652925295,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=36731aca-d939-40a8-b951-0ee4f5297f63 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:31:12 multinode-756894 crio[2698]: time="2024-09-20 19:31:12.653576931Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7251ccbe-a4a6-40da-abcc-cacf77452c00 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:31:12 multinode-756894 crio[2698]: time="2024-09-20 19:31:12.653648634Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7251ccbe-a4a6-40da-abcc-cacf77452c00 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:31:12 multinode-756894 crio[2698]: time="2024-09-20 19:31:12.654018153Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1032aae897c6a6e789586471840775b35b66bd755e2ea7221f6c2a7e9d01023f,PodSandboxId:78021bafe3d20c913f62fb213864b4120ec395f0f5f378ad7cbecfa5d6cc413b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726860462507047601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kr8zb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 20cc00ef-3035-488b-8846-4f43d56fa236,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0475dc410c9bd2962c731c4869948eccbd23017afbe2bf565ebbd87fdeb2bf23,PodSandboxId:76d754085d774cda47005a0433633d32fe881e18fec2cb9bd995aef50fa7d786,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726860428957320778,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2r822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e0c1f9-bbfc-4362-89fa-daeaae236602,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e31c03c3121e36f8b680ffeb04be136d13ed0eb30cc4232945f16828840b7fcf,PodSandboxId:4e099d73b22682c0a5a5a1dde1626a039ae8d2d98ffc894094e53b18de416a1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726860428727515994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-k7xq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38699bbe-9579-45e3-abe2-828348e890d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b1326c5d456cb895967959e46bbd4ccd2743f8eebbf4ea2920d93996b086c5,PodSandboxId:38808a33eadabf60b3bf793daee35e8b017adba19989f83c8c108d030de0a94b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726860428654175913,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5tkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c6ded21-2ef2-4fe8-be29-6f2c3fcc7e35,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60955aaf5949a780a0a25c5166eda54b8cfa0b6459fc2943c7962b3d0b2b479c,PodSandboxId:9f438bf46a92e7ffab993bc69d2ce1ab0a773e378252bf0e2ff344aa0a2c1065,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726860428685619092,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a43c8dc-3fd3-4761-876b-2494967a2f7b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9bd3ed7c925e80838dbb28a891c683435a9d040feae22c3e18aa8e7273cc15,PodSandboxId:77bd787f41e6c6feee7f8596005b33a7314f2999fd11550c663b84f758a289d4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726860424851395385,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6682239b7fb722a605bb9fb206b85a2,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5e65f8c20a2386e254f6706107d0cc7f75f834b088e05d4f04adfc74984a60,PodSandboxId:5fd31bbab35fed3fb22d7232ee57a5e50103639d591865ca5cfe6e2c2cccef57,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726860424814308052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 822c8ceecdfe4119d999c5ee754b8ea8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c77a7e1a224a7e290b56fbe2966968e539e4d6246462134099cf5e363b83dffb,PodSandboxId:ac9232fb23f537a32cd6724d4f0aa7417b505d67a73444703a1614fd39c88ec8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726860424831160293,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ca4c90e5672c1e7faf18f515c7ce595,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6c2d47645b3c320308300d42f92d21313904e5f30c2e0843aaa7c588014c301,PodSandboxId:ed88405164d64c4b8218fab6d9abcde896bfb41cb5c836aed409402e9a0f4b6e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726860424763351235,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56fdd834c9abaafc449abcb898099f8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8dcd882059b0fe1be82143b183508ab50d5688d42f4fd8d917bda34237eba96,PodSandboxId:5a505c52e820cb170c44f822bb0f5662b81a1567befe1e0296e6e9c359b4a0cd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726860095699891078,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kr8zb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 20cc00ef-3035-488b-8846-4f43d56fa236,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d4e37d4505a966f9c27a077765be9a06e11ac9bc0e052057beff94627d97aa,PodSandboxId:983db580543713ff6b79c9d54e5d3d700566e8c06206f15b4fd8278675f97229,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726860038958952580,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-k7xq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38699bbe-9579-45e3-abe2-828348e890d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28022afd37d6d2f008deeded66b17c0a72113eb2ad5fa6907f1036e18f1d975f,PodSandboxId:484bc97168c87403e6d624f33c30eae4932c92e413d6fc360ca0cde0661bbbe1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726860038935052890,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 2a43c8dc-3fd3-4761-876b-2494967a2f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fccbad40f2b3455e5b2be6eb12686d14833c14db21d1480e73f0f2e178f535d6,PodSandboxId:212c62c39933f2f28d2f41ea89dd26b2c4b4c9f9ceaa334e43d1d4c8fdabac3c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726860027205042203,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2r822,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 94e0c1f9-bbfc-4362-89fa-daeaae236602,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12fa8b93a3911dbfe0fc55628c90d22e218afdfbe8f5e7195f783b7c7c8af414,PodSandboxId:2409beaba30e5726b8d9b5f1b0e2faac96a8251d15d45802ade6f092deab1ef7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726860026953549127,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5tkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c6ded21-2ef2-4fe8-be29
-6f2c3fcc7e35,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4461a840382433ce4c8a6f37fc819725d31e2075f1670cb56237965555159f42,PodSandboxId:5664e1f96c4c0c218278727eeed301014e9b3a0687a3db76fc8af725e4555183,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726860015825323175,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 822c8ceecdfe4119d999c5ee754b8ea8,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e8c8df52527f8e0973846dfbbb41192f0968ba9bf00c00c03d6b7da2cc76c21,PodSandboxId:04efda436d7240785cb8c0b48a1928dfbc3b03ffd216666b108f27cbfbadedb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726860015795461437,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6682239b7fb722a605bb9fb206b85a2,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fab8a49afdb38089ed0f1190eef6bdf74f69a5b5bca37ddf5156576bed7e64d8,PodSandboxId:96e2d4cf2c1fbe2fc2b1bd48e31218fd3dabf1869233ade215a1dea2d0c9b7c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726860015822873372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56fdd834c9abaafc449abcb898099f8d,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23a0fc48b5b4d9d4c9ae9065d80b0b2d042a7d5e7919702776ea5eec96d6d70b,PodSandboxId:d22dc1cbb45f98df2dd239baaa1834cf1d79630d7011a5acc083024c76958a99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726860015790876301,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ca4c90e5672c1e7faf18f515c7ce595,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7251ccbe-a4a6-40da-abcc-cacf77452c00 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:31:12 multinode-756894 crio[2698]: time="2024-09-20 19:31:12.697358545Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=727ac55b-9944-4176-a03d-d7c198a1e508 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:31:12 multinode-756894 crio[2698]: time="2024-09-20 19:31:12.697434519Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=727ac55b-9944-4176-a03d-d7c198a1e508 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:31:12 multinode-756894 crio[2698]: time="2024-09-20 19:31:12.698330493Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=35b1c579-63e0-43e2-ac61-55f3b1b1c526 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:31:12 multinode-756894 crio[2698]: time="2024-09-20 19:31:12.698796584Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860672698770605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=35b1c579-63e0-43e2-ac61-55f3b1b1c526 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:31:12 multinode-756894 crio[2698]: time="2024-09-20 19:31:12.699289727Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=02b52e99-8c73-451e-aece-3c0173d81e6c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:31:12 multinode-756894 crio[2698]: time="2024-09-20 19:31:12.699348783Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=02b52e99-8c73-451e-aece-3c0173d81e6c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:31:12 multinode-756894 crio[2698]: time="2024-09-20 19:31:12.700067848Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1032aae897c6a6e789586471840775b35b66bd755e2ea7221f6c2a7e9d01023f,PodSandboxId:78021bafe3d20c913f62fb213864b4120ec395f0f5f378ad7cbecfa5d6cc413b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726860462507047601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kr8zb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 20cc00ef-3035-488b-8846-4f43d56fa236,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0475dc410c9bd2962c731c4869948eccbd23017afbe2bf565ebbd87fdeb2bf23,PodSandboxId:76d754085d774cda47005a0433633d32fe881e18fec2cb9bd995aef50fa7d786,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726860428957320778,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2r822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e0c1f9-bbfc-4362-89fa-daeaae236602,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e31c03c3121e36f8b680ffeb04be136d13ed0eb30cc4232945f16828840b7fcf,PodSandboxId:4e099d73b22682c0a5a5a1dde1626a039ae8d2d98ffc894094e53b18de416a1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726860428727515994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-k7xq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38699bbe-9579-45e3-abe2-828348e890d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b1326c5d456cb895967959e46bbd4ccd2743f8eebbf4ea2920d93996b086c5,PodSandboxId:38808a33eadabf60b3bf793daee35e8b017adba19989f83c8c108d030de0a94b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726860428654175913,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5tkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c6ded21-2ef2-4fe8-be29-6f2c3fcc7e35,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60955aaf5949a780a0a25c5166eda54b8cfa0b6459fc2943c7962b3d0b2b479c,PodSandboxId:9f438bf46a92e7ffab993bc69d2ce1ab0a773e378252bf0e2ff344aa0a2c1065,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726860428685619092,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a43c8dc-3fd3-4761-876b-2494967a2f7b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9bd3ed7c925e80838dbb28a891c683435a9d040feae22c3e18aa8e7273cc15,PodSandboxId:77bd787f41e6c6feee7f8596005b33a7314f2999fd11550c663b84f758a289d4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726860424851395385,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6682239b7fb722a605bb9fb206b85a2,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5e65f8c20a2386e254f6706107d0cc7f75f834b088e05d4f04adfc74984a60,PodSandboxId:5fd31bbab35fed3fb22d7232ee57a5e50103639d591865ca5cfe6e2c2cccef57,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726860424814308052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 822c8ceecdfe4119d999c5ee754b8ea8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c77a7e1a224a7e290b56fbe2966968e539e4d6246462134099cf5e363b83dffb,PodSandboxId:ac9232fb23f537a32cd6724d4f0aa7417b505d67a73444703a1614fd39c88ec8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726860424831160293,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ca4c90e5672c1e7faf18f515c7ce595,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6c2d47645b3c320308300d42f92d21313904e5f30c2e0843aaa7c588014c301,PodSandboxId:ed88405164d64c4b8218fab6d9abcde896bfb41cb5c836aed409402e9a0f4b6e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726860424763351235,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56fdd834c9abaafc449abcb898099f8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8dcd882059b0fe1be82143b183508ab50d5688d42f4fd8d917bda34237eba96,PodSandboxId:5a505c52e820cb170c44f822bb0f5662b81a1567befe1e0296e6e9c359b4a0cd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726860095699891078,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kr8zb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 20cc00ef-3035-488b-8846-4f43d56fa236,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d4e37d4505a966f9c27a077765be9a06e11ac9bc0e052057beff94627d97aa,PodSandboxId:983db580543713ff6b79c9d54e5d3d700566e8c06206f15b4fd8278675f97229,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726860038958952580,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-k7xq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38699bbe-9579-45e3-abe2-828348e890d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28022afd37d6d2f008deeded66b17c0a72113eb2ad5fa6907f1036e18f1d975f,PodSandboxId:484bc97168c87403e6d624f33c30eae4932c92e413d6fc360ca0cde0661bbbe1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726860038935052890,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 2a43c8dc-3fd3-4761-876b-2494967a2f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fccbad40f2b3455e5b2be6eb12686d14833c14db21d1480e73f0f2e178f535d6,PodSandboxId:212c62c39933f2f28d2f41ea89dd26b2c4b4c9f9ceaa334e43d1d4c8fdabac3c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726860027205042203,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2r822,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 94e0c1f9-bbfc-4362-89fa-daeaae236602,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12fa8b93a3911dbfe0fc55628c90d22e218afdfbe8f5e7195f783b7c7c8af414,PodSandboxId:2409beaba30e5726b8d9b5f1b0e2faac96a8251d15d45802ade6f092deab1ef7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726860026953549127,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5tkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c6ded21-2ef2-4fe8-be29
-6f2c3fcc7e35,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4461a840382433ce4c8a6f37fc819725d31e2075f1670cb56237965555159f42,PodSandboxId:5664e1f96c4c0c218278727eeed301014e9b3a0687a3db76fc8af725e4555183,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726860015825323175,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 822c8ceecdfe4119d999c5ee754b8ea8,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e8c8df52527f8e0973846dfbbb41192f0968ba9bf00c00c03d6b7da2cc76c21,PodSandboxId:04efda436d7240785cb8c0b48a1928dfbc3b03ffd216666b108f27cbfbadedb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726860015795461437,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6682239b7fb722a605bb9fb206b85a2,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fab8a49afdb38089ed0f1190eef6bdf74f69a5b5bca37ddf5156576bed7e64d8,PodSandboxId:96e2d4cf2c1fbe2fc2b1bd48e31218fd3dabf1869233ade215a1dea2d0c9b7c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726860015822873372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56fdd834c9abaafc449abcb898099f8d,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23a0fc48b5b4d9d4c9ae9065d80b0b2d042a7d5e7919702776ea5eec96d6d70b,PodSandboxId:d22dc1cbb45f98df2dd239baaa1834cf1d79630d7011a5acc083024c76958a99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726860015790876301,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-756894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ca4c90e5672c1e7faf18f515c7ce595,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=02b52e99-8c73-451e-aece-3c0173d81e6c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1032aae897c6a       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   78021bafe3d20       busybox-7dff88458-kr8zb
	0475dc410c9bd       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   76d754085d774       kindnet-2r822
	e31c03c3121e3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   1                   4e099d73b2268       coredns-7c65d6cfc9-k7xq2
	60955aaf5949a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   9f438bf46a92e       storage-provisioner
	88b1326c5d456       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      4 minutes ago       Running             kube-proxy                1                   38808a33eadab       kube-proxy-m5tkt
	ca9bd3ed7c925       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      4 minutes ago       Running             kube-scheduler            1                   77bd787f41e6c       kube-scheduler-multinode-756894
	c77a7e1a224a7       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   1                   ac9232fb23f53       kube-controller-manager-multinode-756894
	7b5e65f8c20a2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   5fd31bbab35fe       etcd-multinode-756894
	a6c2d47645b3c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            1                   ed88405164d64       kube-apiserver-multinode-756894
	f8dcd882059b0       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   5a505c52e820c       busybox-7dff88458-kr8zb
	72d4e37d4505a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      10 minutes ago      Exited              coredns                   0                   983db58054371       coredns-7c65d6cfc9-k7xq2
	28022afd37d6d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   484bc97168c87       storage-provisioner
	fccbad40f2b34       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      10 minutes ago      Exited              kindnet-cni               0                   212c62c39933f       kindnet-2r822
	12fa8b93a3911       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      10 minutes ago      Exited              kube-proxy                0                   2409beaba30e5       kube-proxy-m5tkt
	4461a84038243       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   5664e1f96c4c0       etcd-multinode-756894
	fab8a49afdb38       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      10 minutes ago      Exited              kube-apiserver            0                   96e2d4cf2c1fb       kube-apiserver-multinode-756894
	9e8c8df52527f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      10 minutes ago      Exited              kube-scheduler            0                   04efda436d724       kube-scheduler-multinode-756894
	23a0fc48b5b4d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      10 minutes ago      Exited              kube-controller-manager   0                   d22dc1cbb45f9       kube-controller-manager-multinode-756894
	
	
	==> coredns [72d4e37d4505a966f9c27a077765be9a06e11ac9bc0e052057beff94627d97aa] <==
	[INFO] 10.244.0.3:45793 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001584314s
	[INFO] 10.244.0.3:59518 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000055636s
	[INFO] 10.244.0.3:38343 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000047642s
	[INFO] 10.244.0.3:37834 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001115807s
	[INFO] 10.244.0.3:42888 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000057199s
	[INFO] 10.244.0.3:41157 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000047496s
	[INFO] 10.244.0.3:58352 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065473s
	[INFO] 10.244.1.2:51514 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152904s
	[INFO] 10.244.1.2:59373 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162696s
	[INFO] 10.244.1.2:44401 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091841s
	[INFO] 10.244.1.2:37696 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087115s
	[INFO] 10.244.0.3:42996 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132227s
	[INFO] 10.244.0.3:37331 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008351s
	[INFO] 10.244.0.3:58970 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010357s
	[INFO] 10.244.0.3:48326 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135472s
	[INFO] 10.244.1.2:45935 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130523s
	[INFO] 10.244.1.2:49766 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000224752s
	[INFO] 10.244.1.2:35175 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000146748s
	[INFO] 10.244.1.2:39378 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131962s
	[INFO] 10.244.0.3:35114 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108425s
	[INFO] 10.244.0.3:36492 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000100334s
	[INFO] 10.244.0.3:57706 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000059811s
	[INFO] 10.244.0.3:57975 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000596s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e31c03c3121e36f8b680ffeb04be136d13ed0eb30cc4232945f16828840b7fcf] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:53894 - 943 "HINFO IN 1506214172435802275.7599767001225360048. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031021286s
	
	
	==> describe nodes <==
	Name:               multinode-756894
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-756894
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=multinode-756894
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T19_20_22_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 19:20:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-756894
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:31:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 19:27:08 +0000   Fri, 20 Sep 2024 19:20:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 19:27:08 +0000   Fri, 20 Sep 2024 19:20:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 19:27:08 +0000   Fri, 20 Sep 2024 19:20:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 19:27:08 +0000   Fri, 20 Sep 2024 19:20:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.168
	  Hostname:    multinode-756894
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ddf615a403e94b3bbed6b8abde987c04
	  System UUID:                ddf615a4-03e9-4b3b-bed6-b8abde987c04
	  Boot ID:                    2a1e1ca6-0967-488d-bf89-1abcc6d05f87
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kr8zb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m38s
	  kube-system                 coredns-7c65d6cfc9-k7xq2                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-756894                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-2r822                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-756894             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-756894    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-m5tkt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-756894             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m3s                 kube-proxy       
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-756894 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-756894 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-756894 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-756894 event: Registered Node multinode-756894 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-756894 status is now: NodeReady
	  Normal  Starting                 4m8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m8s)  kubelet          Node multinode-756894 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m8s)  kubelet          Node multinode-756894 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m8s)  kubelet          Node multinode-756894 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m1s                 node-controller  Node multinode-756894 event: Registered Node multinode-756894 in Controller
	
	
	Name:               multinode-756894-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-756894-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=multinode-756894
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T19_27_47_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 19:27:46 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-756894-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:28:48 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 20 Sep 2024 19:28:17 +0000   Fri, 20 Sep 2024 19:29:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 20 Sep 2024 19:28:17 +0000   Fri, 20 Sep 2024 19:29:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 20 Sep 2024 19:28:17 +0000   Fri, 20 Sep 2024 19:29:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 20 Sep 2024 19:28:17 +0000   Fri, 20 Sep 2024 19:29:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.204
	  Hostname:    multinode-756894-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 196b0cbe86e346c498b66aa6b18004c3
	  System UUID:                196b0cbe-86e3-46c4-98b6-6aa6b18004c3
	  Boot ID:                    7e230b42-61af-49bf-87bb-836299e7d24e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xbpkw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 kindnet-zxd86              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-4m9vh           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m54s                  kube-proxy       
	  Normal  Starting                 3m21s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-756894-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-756894-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-756894-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                9m41s                  kubelet          Node multinode-756894-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m27s (x2 over 3m27s)  kubelet          Node multinode-756894-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m27s (x2 over 3m27s)  kubelet          Node multinode-756894-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m27s (x2 over 3m27s)  kubelet          Node multinode-756894-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m22s                  node-controller  Node multinode-756894-m02 event: Registered Node multinode-756894-m02 in Controller
	  Normal  NodeReady                3m8s                   kubelet          Node multinode-756894-m02 status is now: NodeReady
	  Normal  NodeNotReady             102s                   node-controller  Node multinode-756894-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.062780] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.180277] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.114991] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.269161] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.868815] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +3.400088] systemd-fstab-generator[872]: Ignoring "noauto" option for root device
	[  +0.060559] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.983400] systemd-fstab-generator[1208]: Ignoring "noauto" option for root device
	[  +0.095278] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.707595] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[  +0.086888] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.398435] kauditd_printk_skb: 60 callbacks suppressed
	[Sep20 19:21] kauditd_printk_skb: 12 callbacks suppressed
	[Sep20 19:26] systemd-fstab-generator[2623]: Ignoring "noauto" option for root device
	[  +0.139557] systemd-fstab-generator[2635]: Ignoring "noauto" option for root device
	[  +0.183646] systemd-fstab-generator[2649]: Ignoring "noauto" option for root device
	[  +0.157658] systemd-fstab-generator[2661]: Ignoring "noauto" option for root device
	[  +0.279246] systemd-fstab-generator[2689]: Ignoring "noauto" option for root device
	[Sep20 19:27] systemd-fstab-generator[2783]: Ignoring "noauto" option for root device
	[  +0.080233] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.236230] systemd-fstab-generator[2905]: Ignoring "noauto" option for root device
	[  +4.690570] kauditd_printk_skb: 74 callbacks suppressed
	[ +13.611127] systemd-fstab-generator[3746]: Ignoring "noauto" option for root device
	[  +0.094517] kauditd_printk_skb: 34 callbacks suppressed
	[ +20.173547] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [4461a840382433ce4c8a6f37fc819725d31e2075f1670cb56237965555159f42] <==
	{"level":"warn","ts":"2024-09-20T19:22:11.963629Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"413.709847ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T19:22:11.963672Z","caller":"traceutil/trace.go:171","msg":"trace[1618710968] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:585; }","duration":"413.750697ms","start":"2024-09-20T19:22:11.549913Z","end":"2024-09-20T19:22:11.963664Z","steps":["trace[1618710968] 'agreement among raft nodes before linearized reading'  (duration: 413.661995ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T19:22:11.963830Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T19:22:11.549888Z","time spent":"413.932122ms","remote":"127.0.0.1:54558","response type":"/etcdserverpb.KV/Range","request count":0,"request size":25,"response count":0,"response size":29,"request content":"key:\"/registry/limitranges\" limit:1 "}
	{"level":"warn","ts":"2024-09-20T19:22:11.963356Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T19:22:11.547376Z","time spent":"415.938679ms","remote":"127.0.0.1:54592","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":42,"response count":0,"response size":2073,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-756894-m03\" mod_revision:583 > success:<request_put:<key:\"/registry/minions/multinode-756894-m03\" value_size:1974 >> failure:<request_range:<key:\"/registry/minions/multinode-756894-m03\" > >"}
	{"level":"warn","ts":"2024-09-20T19:22:11.964040Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"353.551819ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-09-20T19:22:11.964076Z","caller":"traceutil/trace.go:171","msg":"trace[232552228] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:585; }","duration":"353.588641ms","start":"2024-09-20T19:22:11.610481Z","end":"2024-09-20T19:22:11.964070Z","steps":["trace[232552228] 'agreement among raft nodes before linearized reading'  (duration: 353.455385ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T19:22:11.964119Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T19:22:11.610435Z","time spent":"353.678754ms","remote":"127.0.0.1:54588","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1140,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"info","ts":"2024-09-20T19:22:12.073638Z","caller":"traceutil/trace.go:171","msg":"trace[1341622855] transaction","detail":"{read_only:false; response_revision:586; number_of_response:1; }","duration":"107.635451ms","start":"2024-09-20T19:22:11.965988Z","end":"2024-09-20T19:22:12.073623Z","steps":["trace[1341622855] 'process raft request'  (duration: 100.893522ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:22:12.073968Z","caller":"traceutil/trace.go:171","msg":"trace[282571320] transaction","detail":"{read_only:false; number_of_response:1; response_revision:586; }","duration":"103.08865ms","start":"2024-09-20T19:22:11.970870Z","end":"2024-09-20T19:22:12.073958Z","steps":["trace[282571320] 'process raft request'  (duration: 103.017745ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:22:12.074119Z","caller":"traceutil/trace.go:171","msg":"trace[1576313543] transaction","detail":"{read_only:false; response_revision:588; number_of_response:1; }","duration":"101.05498ms","start":"2024-09-20T19:22:11.973059Z","end":"2024-09-20T19:22:12.074114Z","steps":["trace[1576313543] 'process raft request'  (duration: 100.885914ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:22:12.074277Z","caller":"traceutil/trace.go:171","msg":"trace[821219588] transaction","detail":"{read_only:false; response_revision:587; number_of_response:1; }","duration":"101.294044ms","start":"2024-09-20T19:22:11.972978Z","end":"2024-09-20T19:22:12.074272Z","steps":["trace[821219588] 'process raft request'  (duration: 100.938334ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:22:12.074517Z","caller":"traceutil/trace.go:171","msg":"trace[1898443886] transaction","detail":"{read_only:false; response_revision:589; number_of_response:1; }","duration":"101.404358ms","start":"2024-09-20T19:22:11.973106Z","end":"2024-09-20T19:22:12.074511Z","steps":["trace[1898443886] 'process raft request'  (duration: 100.855462ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:22:12.073992Z","caller":"traceutil/trace.go:171","msg":"trace[2095581807] linearizableReadLoop","detail":"{readStateIndex:622; appliedIndex:620; }","duration":"102.90822ms","start":"2024-09-20T19:22:11.971076Z","end":"2024-09-20T19:22:12.073984Z","steps":["trace[2095581807] 'read index received'  (duration: 95.675567ms)","trace[2095581807] 'applied index is now lower than readState.Index'  (duration: 7.232261ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T19:22:12.074779Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.690716ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-756894-m03\" ","response":"range_response_count:1 size:2141"}
	{"level":"info","ts":"2024-09-20T19:22:12.074815Z","caller":"traceutil/trace.go:171","msg":"trace[1363851090] range","detail":"{range_begin:/registry/minions/multinode-756894-m03; range_end:; response_count:1; response_revision:589; }","duration":"103.735381ms","start":"2024-09-20T19:22:11.971073Z","end":"2024-09-20T19:22:12.074808Z","steps":["trace[1363851090] 'agreement among raft nodes before linearized reading'  (duration: 103.674148ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:25:22.660008Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-20T19:25:22.660155Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-756894","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.168:2380"],"advertise-client-urls":["https://192.168.39.168:2379"]}
	{"level":"warn","ts":"2024-09-20T19:25:22.660336Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T19:25:22.660488Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T19:25:22.738992Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.168:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T19:25:22.739054Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.168:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-20T19:25:22.739181Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e34fba8f5739efe8","current-leader-member-id":"e34fba8f5739efe8"}
	{"level":"info","ts":"2024-09-20T19:25:22.741890Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.168:2380"}
	{"level":"info","ts":"2024-09-20T19:25:22.742032Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.168:2380"}
	{"level":"info","ts":"2024-09-20T19:25:22.742057Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-756894","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.168:2380"],"advertise-client-urls":["https://192.168.39.168:2379"]}
	
	
	==> etcd [7b5e65f8c20a2386e254f6706107d0cc7f75f834b088e05d4f04adfc74984a60] <==
	{"level":"info","ts":"2024-09-20T19:27:05.439878Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.168:2380"}
	{"level":"info","ts":"2024-09-20T19:27:05.440235Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"e34fba8f5739efe8","initial-advertise-peer-urls":["https://192.168.39.168:2380"],"listen-peer-urls":["https://192.168.39.168:2380"],"advertise-client-urls":["https://192.168.39.168:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.168:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-20T19:27:05.440313Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-20T19:27:05.441805Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f729467791c9db0d","local-member-id":"e34fba8f5739efe8","added-peer-id":"e34fba8f5739efe8","added-peer-peer-urls":["https://192.168.39.168:2380"]}
	{"level":"info","ts":"2024-09-20T19:27:05.442252Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f729467791c9db0d","local-member-id":"e34fba8f5739efe8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:27:05.442337Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:27:05.441851Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-20T19:27:05.442761Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-20T19:27:06.631297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-20T19:27:06.631433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-20T19:27:06.631472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 received MsgPreVoteResp from e34fba8f5739efe8 at term 2"}
	{"level":"info","ts":"2024-09-20T19:27:06.631508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 became candidate at term 3"}
	{"level":"info","ts":"2024-09-20T19:27:06.631533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 received MsgVoteResp from e34fba8f5739efe8 at term 3"}
	{"level":"info","ts":"2024-09-20T19:27:06.631559Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 became leader at term 3"}
	{"level":"info","ts":"2024-09-20T19:27:06.631585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e34fba8f5739efe8 elected leader e34fba8f5739efe8 at term 3"}
	{"level":"info","ts":"2024-09-20T19:27:06.638261Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T19:27:06.638212Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e34fba8f5739efe8","local-member-attributes":"{Name:multinode-756894 ClientURLs:[https://192.168.39.168:2379]}","request-path":"/0/members/e34fba8f5739efe8/attributes","cluster-id":"f729467791c9db0d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T19:27:06.639072Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T19:27:06.639329Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T19:27:06.639365Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T19:27:06.639748Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T19:27:06.640114Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T19:27:06.640888Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.168:2379"}
	{"level":"info","ts":"2024-09-20T19:27:06.640985Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T19:27:51.122309Z","caller":"traceutil/trace.go:171","msg":"trace[1699313954] transaction","detail":"{read_only:false; response_revision:1030; number_of_response:1; }","duration":"206.099715ms","start":"2024-09-20T19:27:50.916160Z","end":"2024-09-20T19:27:51.122259Z","steps":["trace[1699313954] 'process raft request'  (duration: 205.955081ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:31:13 up 11 min,  0 users,  load average: 0.11, 0.13, 0.09
	Linux multinode-756894 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [0475dc410c9bd2962c731c4869948eccbd23017afbe2bf565ebbd87fdeb2bf23] <==
	I0920 19:30:09.922078       1 main.go:322] Node multinode-756894-m02 has CIDR [10.244.1.0/24] 
	I0920 19:30:19.929416       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 19:30:19.929474       1 main.go:299] handling current node
	I0920 19:30:19.929492       1 main.go:295] Handling node with IPs: map[192.168.39.204:{}]
	I0920 19:30:19.929498       1 main.go:322] Node multinode-756894-m02 has CIDR [10.244.1.0/24] 
	I0920 19:30:29.923528       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 19:30:29.923662       1 main.go:299] handling current node
	I0920 19:30:29.923743       1 main.go:295] Handling node with IPs: map[192.168.39.204:{}]
	I0920 19:30:29.923769       1 main.go:322] Node multinode-756894-m02 has CIDR [10.244.1.0/24] 
	I0920 19:30:39.922084       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 19:30:39.922208       1 main.go:299] handling current node
	I0920 19:30:39.922237       1 main.go:295] Handling node with IPs: map[192.168.39.204:{}]
	I0920 19:30:39.922255       1 main.go:322] Node multinode-756894-m02 has CIDR [10.244.1.0/24] 
	I0920 19:30:49.930443       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 19:30:49.930505       1 main.go:299] handling current node
	I0920 19:30:49.930525       1 main.go:295] Handling node with IPs: map[192.168.39.204:{}]
	I0920 19:30:49.930531       1 main.go:322] Node multinode-756894-m02 has CIDR [10.244.1.0/24] 
	I0920 19:30:59.921537       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 19:30:59.921665       1 main.go:299] handling current node
	I0920 19:30:59.921743       1 main.go:295] Handling node with IPs: map[192.168.39.204:{}]
	I0920 19:30:59.921768       1 main.go:322] Node multinode-756894-m02 has CIDR [10.244.1.0/24] 
	I0920 19:31:09.921772       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 19:31:09.921855       1 main.go:299] handling current node
	I0920 19:31:09.921877       1 main.go:295] Handling node with IPs: map[192.168.39.204:{}]
	I0920 19:31:09.921883       1 main.go:322] Node multinode-756894-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [fccbad40f2b3455e5b2be6eb12686d14833c14db21d1480e73f0f2e178f535d6] <==
	I0920 19:24:38.224187       1 main.go:322] Node multinode-756894-m03 has CIDR [10.244.3.0/24] 
	I0920 19:24:48.217379       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 19:24:48.217506       1 main.go:299] handling current node
	I0920 19:24:48.217543       1 main.go:295] Handling node with IPs: map[192.168.39.204:{}]
	I0920 19:24:48.217563       1 main.go:322] Node multinode-756894-m02 has CIDR [10.244.1.0/24] 
	I0920 19:24:48.217736       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0920 19:24:48.217763       1 main.go:322] Node multinode-756894-m03 has CIDR [10.244.3.0/24] 
	I0920 19:24:58.215889       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 19:24:58.216006       1 main.go:299] handling current node
	I0920 19:24:58.216037       1 main.go:295] Handling node with IPs: map[192.168.39.204:{}]
	I0920 19:24:58.216055       1 main.go:322] Node multinode-756894-m02 has CIDR [10.244.1.0/24] 
	I0920 19:24:58.216206       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0920 19:24:58.216228       1 main.go:322] Node multinode-756894-m03 has CIDR [10.244.3.0/24] 
	I0920 19:25:08.217913       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 19:25:08.218008       1 main.go:299] handling current node
	I0920 19:25:08.218044       1 main.go:295] Handling node with IPs: map[192.168.39.204:{}]
	I0920 19:25:08.218052       1 main.go:322] Node multinode-756894-m02 has CIDR [10.244.1.0/24] 
	I0920 19:25:08.218211       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0920 19:25:08.218243       1 main.go:322] Node multinode-756894-m03 has CIDR [10.244.3.0/24] 
	I0920 19:25:18.224267       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0920 19:25:18.224371       1 main.go:299] handling current node
	I0920 19:25:18.224409       1 main.go:295] Handling node with IPs: map[192.168.39.204:{}]
	I0920 19:25:18.224428       1 main.go:322] Node multinode-756894-m02 has CIDR [10.244.1.0/24] 
	I0920 19:25:18.224614       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0920 19:25:18.224666       1 main.go:322] Node multinode-756894-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [a6c2d47645b3c320308300d42f92d21313904e5f30c2e0843aaa7c588014c301] <==
	I0920 19:27:08.051463       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 19:27:08.051502       1 policy_source.go:224] refreshing policies
	I0920 19:27:08.058086       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0920 19:27:08.058164       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0920 19:27:08.058193       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0920 19:27:08.058218       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0920 19:27:08.058268       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 19:27:08.060074       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0920 19:27:08.060613       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0920 19:27:08.060850       1 shared_informer.go:320] Caches are synced for configmaps
	I0920 19:27:08.060963       1 aggregator.go:171] initial CRD sync complete...
	I0920 19:27:08.060989       1 autoregister_controller.go:144] Starting autoregister controller
	I0920 19:27:08.061011       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0920 19:27:08.061032       1 cache.go:39] Caches are synced for autoregister controller
	I0920 19:27:08.063361       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0920 19:27:08.067128       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0920 19:27:08.085574       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0920 19:27:08.877013       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0920 19:27:10.079393       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 19:27:10.198262       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0920 19:27:10.216224       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 19:27:10.284082       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0920 19:27:10.293426       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0920 19:27:11.351394       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 19:27:11.598385       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [fab8a49afdb38089ed0f1190eef6bdf74f69a5b5bca37ddf5156576bed7e64d8] <==
	I0920 19:20:20.934082       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0920 19:20:20.949478       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 19:20:25.721483       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0920 19:20:26.022212       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0920 19:21:36.622095       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:60976: use of closed network connection
	E0920 19:21:36.805905       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:60992: use of closed network connection
	E0920 19:21:37.030494       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:32792: use of closed network connection
	E0920 19:21:37.203996       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:32810: use of closed network connection
	E0920 19:21:37.371898       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:32826: use of closed network connection
	E0920 19:21:37.556950       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:32838: use of closed network connection
	E0920 19:21:37.830820       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:32874: use of closed network connection
	E0920 19:21:38.002893       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:32892: use of closed network connection
	E0920 19:21:38.182972       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:32902: use of closed network connection
	E0920 19:21:38.356269       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:32910: use of closed network connection
	E0920 19:22:11.874856       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
	E0920 19:22:11.874870       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0920 19:22:11.874906       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 10.975µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0920 19:22:11.876250       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0920 19:22:11.876296       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0920 19:22:11.877468       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0920 19:22:11.877508       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0920 19:22:11.878659       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0920 19:22:11.878835       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.210266ms" method="PATCH" path="/api/v1/namespaces/default/events/multinode-756894-m03.17f70a23c909ce57" result=null
	E0920 19:22:11.880171       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.481087ms" method="GET" path="/api/v1/nodes/multinode-756894-m03" result=null
	I0920 19:25:22.660853       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-controller-manager [23a0fc48b5b4d9d4c9ae9065d80b0b2d042a7d5e7919702776ea5eec96d6d70b] <==
	I0920 19:22:58.835358       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-756894-m02"
	I0920 19:22:58.835589       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:22:59.884223       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-756894-m02"
	I0920 19:22:59.884949       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-756894-m03\" does not exist"
	I0920 19:22:59.904652       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-756894-m03" podCIDRs=["10.244.3.0/24"]
	I0920 19:22:59.904824       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:22:59.904848       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:22:59.904914       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:23:00.310326       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:23:00.353080       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:23:00.713447       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:23:10.227188       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:23:17.141483       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-756894-m02"
	I0920 19:23:17.142120       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:23:17.154115       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:23:20.234824       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:24:00.255253       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-756894-m03"
	I0920 19:24:00.255666       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m02"
	I0920 19:24:00.259259       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:24:00.281111       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m02"
	I0920 19:24:00.284964       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:24:00.312776       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.584036ms"
	I0920 19:24:00.313856       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="36.108µs"
	I0920 19:24:05.330578       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m02"
	I0920 19:24:15.416456       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	
	
	==> kube-controller-manager [c77a7e1a224a7e290b56fbe2966968e539e4d6246462134099cf5e363b83dffb] <==
	I0920 19:28:24.754590       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-756894-m03" podCIDRs=["10.244.2.0/24"]
	I0920 19:28:24.755296       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:28:24.755474       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:28:24.771985       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:28:25.147608       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:28:25.475102       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:28:26.518625       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:28:35.118677       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:28:42.951413       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:28:42.952124       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-756894-m02"
	I0920 19:28:42.960660       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:28:46.466250       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:28:47.632394       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:28:47.647387       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:28:48.103548       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-756894-m02"
	I0920 19:28:48.104045       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m03"
	I0920 19:29:31.328758       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-kr8ph"
	I0920 19:29:31.358311       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-kr8ph"
	I0920 19:29:31.358355       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-djt5n"
	I0920 19:29:31.385925       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-djt5n"
	I0920 19:29:31.490472       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m02"
	I0920 19:29:31.509788       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m02"
	I0920 19:29:31.513597       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.618541ms"
	I0920 19:29:31.513680       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="35.885µs"
	I0920 19:29:36.552598       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-756894-m02"
	
	
	==> kube-proxy [12fa8b93a3911dbfe0fc55628c90d22e218afdfbe8f5e7195f783b7c7c8af414] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 19:20:27.410928       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 19:20:27.436519       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.168"]
	E0920 19:20:27.437783       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 19:20:27.488313       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 19:20:27.488360       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 19:20:27.488385       1 server_linux.go:169] "Using iptables Proxier"
	I0920 19:20:27.491367       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 19:20:27.491841       1 server.go:483] "Version info" version="v1.31.1"
	I0920 19:20:27.491870       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 19:20:27.493041       1 config.go:199] "Starting service config controller"
	I0920 19:20:27.493090       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 19:20:27.493129       1 config.go:105] "Starting endpoint slice config controller"
	I0920 19:20:27.493151       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 19:20:27.493597       1 config.go:328] "Starting node config controller"
	I0920 19:20:27.493645       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 19:20:27.593190       1 shared_informer.go:320] Caches are synced for service config
	I0920 19:20:27.593389       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 19:20:27.593738       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [88b1326c5d456cb895967959e46bbd4ccd2743f8eebbf4ea2920d93996b086c5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 19:27:09.094249       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 19:27:09.103439       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.168"]
	E0920 19:27:09.103521       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 19:27:09.184681       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 19:27:09.184809       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 19:27:09.184832       1 server_linux.go:169] "Using iptables Proxier"
	I0920 19:27:09.187279       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 19:27:09.187518       1 server.go:483] "Version info" version="v1.31.1"
	I0920 19:27:09.187548       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 19:27:09.189077       1 config.go:199] "Starting service config controller"
	I0920 19:27:09.189141       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 19:27:09.189173       1 config.go:105] "Starting endpoint slice config controller"
	I0920 19:27:09.189195       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 19:27:09.189790       1 config.go:328] "Starting node config controller"
	I0920 19:27:09.189818       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 19:27:09.290749       1 shared_informer.go:320] Caches are synced for node config
	I0920 19:27:09.290798       1 shared_informer.go:320] Caches are synced for service config
	I0920 19:27:09.290819       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [9e8c8df52527f8e0973846dfbbb41192f0968ba9bf00c00c03d6b7da2cc76c21] <==
	W0920 19:20:18.507151       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 19:20:18.508752       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0920 19:20:18.508813       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:20:18.507208       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 19:20:18.508932       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:20:18.507637       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 19:20:18.509062       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:20:19.385403       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 19:20:19.385491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:20:19.394665       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 19:20:19.394809       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 19:20:19.451175       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 19:20:19.451583       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 19:20:19.458030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 19:20:19.458111       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 19:20:19.534432       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 19:20:19.534535       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:20:19.556495       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 19:20:19.556583       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:20:19.621516       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 19:20:19.621641       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:20:19.718783       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 19:20:19.718891       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0920 19:20:21.602502       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0920 19:25:22.661422       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ca9bd3ed7c925e80838dbb28a891c683435a9d040feae22c3e18aa8e7273cc15] <==
	I0920 19:27:05.935135       1 serving.go:386] Generated self-signed cert in-memory
	W0920 19:27:07.922790       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 19:27:07.922885       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 19:27:07.922971       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 19:27:07.922983       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 19:27:07.997294       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 19:27:08.001748       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 19:27:08.005799       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 19:27:08.005866       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 19:27:08.008998       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 19:27:08.010832       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 19:27:08.106294       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 19:29:54 multinode-756894 kubelet[2912]: E0920 19:29:54.206463    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860594205774041,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:30:04 multinode-756894 kubelet[2912]: E0920 19:30:04.163771    2912 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 19:30:04 multinode-756894 kubelet[2912]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 19:30:04 multinode-756894 kubelet[2912]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 19:30:04 multinode-756894 kubelet[2912]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 19:30:04 multinode-756894 kubelet[2912]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 19:30:04 multinode-756894 kubelet[2912]: E0920 19:30:04.207732    2912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860604207203496,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:30:04 multinode-756894 kubelet[2912]: E0920 19:30:04.207774    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860604207203496,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:30:14 multinode-756894 kubelet[2912]: E0920 19:30:14.209460    2912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860614208782641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:30:14 multinode-756894 kubelet[2912]: E0920 19:30:14.209508    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860614208782641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:30:24 multinode-756894 kubelet[2912]: E0920 19:30:24.210497    2912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860624210263763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:30:24 multinode-756894 kubelet[2912]: E0920 19:30:24.210518    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860624210263763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:30:34 multinode-756894 kubelet[2912]: E0920 19:30:34.212461    2912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860634211831869,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:30:34 multinode-756894 kubelet[2912]: E0920 19:30:34.212881    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860634211831869,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:30:44 multinode-756894 kubelet[2912]: E0920 19:30:44.214589    2912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860644214188508,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:30:44 multinode-756894 kubelet[2912]: E0920 19:30:44.214613    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860644214188508,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:30:54 multinode-756894 kubelet[2912]: E0920 19:30:54.221928    2912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860654219146226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:30:54 multinode-756894 kubelet[2912]: E0920 19:30:54.223641    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860654219146226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:31:04 multinode-756894 kubelet[2912]: E0920 19:31:04.162997    2912 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 19:31:04 multinode-756894 kubelet[2912]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 19:31:04 multinode-756894 kubelet[2912]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 19:31:04 multinode-756894 kubelet[2912]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 19:31:04 multinode-756894 kubelet[2912]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 19:31:04 multinode-756894 kubelet[2912]: E0920 19:31:04.225968    2912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860664225572707,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:31:04 multinode-756894 kubelet[2912]: E0920 19:31:04.226006    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726860664225572707,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 19:31:12.297027  784258 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19678-739831/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-756894 -n multinode-756894
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-756894 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (144.57s)

                                                
                                    
TestPreload (166.35s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-232055 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0920 19:36:24.179730  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-232055 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m37.006597879s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-232055 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-232055 image pull gcr.io/k8s-minikube/busybox: (1.160889031s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-232055
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-232055: (7.284228244s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-232055 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-232055 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (57.98482426s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-232055 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2024-09-20 19:37:45.224993722 +0000 UTC m=+5117.070638314
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-232055 -n test-preload-232055
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-232055 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-232055 logs -n 25: (1.085383298s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-756894 ssh -n                                                                 | multinode-756894     | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | multinode-756894-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-756894 ssh -n multinode-756894 sudo cat                                       | multinode-756894     | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | /home/docker/cp-test_multinode-756894-m03_multinode-756894.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-756894 cp multinode-756894-m03:/home/docker/cp-test.txt                       | multinode-756894     | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | multinode-756894-m02:/home/docker/cp-test_multinode-756894-m03_multinode-756894-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-756894 ssh -n                                                                 | multinode-756894     | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | multinode-756894-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-756894 ssh -n multinode-756894-m02 sudo cat                                   | multinode-756894     | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	|         | /home/docker/cp-test_multinode-756894-m03_multinode-756894-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-756894 node stop m03                                                          | multinode-756894     | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	| node    | multinode-756894 node start                                                             | multinode-756894     | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:23 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-756894                                                                | multinode-756894     | jenkins | v1.34.0 | 20 Sep 24 19:23 UTC |                     |
	| stop    | -p multinode-756894                                                                     | multinode-756894     | jenkins | v1.34.0 | 20 Sep 24 19:23 UTC |                     |
	| start   | -p multinode-756894                                                                     | multinode-756894     | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:28 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-756894                                                                | multinode-756894     | jenkins | v1.34.0 | 20 Sep 24 19:28 UTC |                     |
	| node    | multinode-756894 node delete                                                            | multinode-756894     | jenkins | v1.34.0 | 20 Sep 24 19:28 UTC | 20 Sep 24 19:28 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-756894 stop                                                                   | multinode-756894     | jenkins | v1.34.0 | 20 Sep 24 19:28 UTC |                     |
	| start   | -p multinode-756894                                                                     | multinode-756894     | jenkins | v1.34.0 | 20 Sep 24 19:31 UTC | 20 Sep 24 19:34 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-756894                                                                | multinode-756894     | jenkins | v1.34.0 | 20 Sep 24 19:34 UTC |                     |
	| start   | -p multinode-756894-m02                                                                 | multinode-756894-m02 | jenkins | v1.34.0 | 20 Sep 24 19:34 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-756894-m03                                                                 | multinode-756894-m03 | jenkins | v1.34.0 | 20 Sep 24 19:34 UTC | 20 Sep 24 19:34 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-756894                                                                 | multinode-756894     | jenkins | v1.34.0 | 20 Sep 24 19:34 UTC |                     |
	| delete  | -p multinode-756894-m03                                                                 | multinode-756894-m03 | jenkins | v1.34.0 | 20 Sep 24 19:34 UTC | 20 Sep 24 19:34 UTC |
	| delete  | -p multinode-756894                                                                     | multinode-756894     | jenkins | v1.34.0 | 20 Sep 24 19:35 UTC | 20 Sep 24 19:35 UTC |
	| start   | -p test-preload-232055                                                                  | test-preload-232055  | jenkins | v1.34.0 | 20 Sep 24 19:35 UTC | 20 Sep 24 19:36 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-232055 image pull                                                          | test-preload-232055  | jenkins | v1.34.0 | 20 Sep 24 19:36 UTC | 20 Sep 24 19:36 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-232055                                                                  | test-preload-232055  | jenkins | v1.34.0 | 20 Sep 24 19:36 UTC | 20 Sep 24 19:36 UTC |
	| start   | -p test-preload-232055                                                                  | test-preload-232055  | jenkins | v1.34.0 | 20 Sep 24 19:36 UTC | 20 Sep 24 19:37 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-232055 image list                                                          | test-preload-232055  | jenkins | v1.34.0 | 20 Sep 24 19:37 UTC | 20 Sep 24 19:37 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 19:36:47
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 19:36:47.054348  786638 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:36:47.054483  786638 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:36:47.054493  786638 out.go:358] Setting ErrFile to fd 2...
	I0920 19:36:47.054498  786638 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:36:47.054683  786638 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
	I0920 19:36:47.055214  786638 out.go:352] Setting JSON to false
	I0920 19:36:47.056159  786638 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11957,"bootTime":1726849050,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 19:36:47.056258  786638 start.go:139] virtualization: kvm guest
	I0920 19:36:47.058619  786638 out.go:177] * [test-preload-232055] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 19:36:47.060013  786638 notify.go:220] Checking for updates...
	I0920 19:36:47.060025  786638 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 19:36:47.061341  786638 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:36:47.062654  786638 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 19:36:47.063915  786638 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 19:36:47.065269  786638 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 19:36:47.066513  786638 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:36:47.068247  786638 config.go:182] Loaded profile config "test-preload-232055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0920 19:36:47.068650  786638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:36:47.068697  786638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:36:47.084057  786638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
	I0920 19:36:47.084530  786638 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:36:47.085144  786638 main.go:141] libmachine: Using API Version  1
	I0920 19:36:47.085166  786638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:36:47.085485  786638 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:36:47.085723  786638 main.go:141] libmachine: (test-preload-232055) Calling .DriverName
	I0920 19:36:47.087602  786638 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 19:36:47.088837  786638 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:36:47.089158  786638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:36:47.089200  786638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:36:47.103862  786638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41495
	I0920 19:36:47.104350  786638 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:36:47.104797  786638 main.go:141] libmachine: Using API Version  1
	I0920 19:36:47.104822  786638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:36:47.105125  786638 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:36:47.105275  786638 main.go:141] libmachine: (test-preload-232055) Calling .DriverName
	I0920 19:36:47.138649  786638 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 19:36:47.139981  786638 start.go:297] selected driver: kvm2
	I0920 19:36:47.139996  786638 start.go:901] validating driver "kvm2" against &{Name:test-preload-232055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-232055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:36:47.140099  786638 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:36:47.140826  786638 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:36:47.140909  786638 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19678-739831/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 19:36:47.156327  786638 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 19:36:47.156675  786638 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:36:47.156703  786638 cni.go:84] Creating CNI manager for ""
	I0920 19:36:47.156750  786638 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:36:47.156805  786638 start.go:340] cluster config:
	{Name:test-preload-232055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-232055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:36:47.156927  786638 iso.go:125] acquiring lock: {Name:mk7c8e0c52ea50ffb7ac28fb9347d4f667085c62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:36:47.159667  786638 out.go:177] * Starting "test-preload-232055" primary control-plane node in "test-preload-232055" cluster
	I0920 19:36:47.160713  786638 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0920 19:36:47.191127  786638 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0920 19:36:47.191158  786638 cache.go:56] Caching tarball of preloaded images
	I0920 19:36:47.191328  786638 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0920 19:36:47.193016  786638 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0920 19:36:47.194097  786638 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0920 19:36:47.225691  786638 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0920 19:36:50.345491  786638 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0920 19:36:50.345590  786638 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0920 19:36:51.217160  786638 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0920 19:36:51.217306  786638 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/test-preload-232055/config.json ...
	I0920 19:36:51.217537  786638 start.go:360] acquireMachinesLock for test-preload-232055: {Name:mke27b943eaf3105a3a7818ba8cbb5bd07aa92e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 19:36:51.217612  786638 start.go:364] duration metric: took 46.484µs to acquireMachinesLock for "test-preload-232055"
	I0920 19:36:51.217628  786638 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:36:51.217634  786638 fix.go:54] fixHost starting: 
	I0920 19:36:51.217908  786638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:36:51.217946  786638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:36:51.232853  786638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35463
	I0920 19:36:51.233378  786638 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:36:51.233866  786638 main.go:141] libmachine: Using API Version  1
	I0920 19:36:51.233889  786638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:36:51.234244  786638 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:36:51.234431  786638 main.go:141] libmachine: (test-preload-232055) Calling .DriverName
	I0920 19:36:51.234673  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetState
	I0920 19:36:51.236363  786638 fix.go:112] recreateIfNeeded on test-preload-232055: state=Stopped err=<nil>
	I0920 19:36:51.236395  786638 main.go:141] libmachine: (test-preload-232055) Calling .DriverName
	W0920 19:36:51.236568  786638 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:36:51.239456  786638 out.go:177] * Restarting existing kvm2 VM for "test-preload-232055" ...
	I0920 19:36:51.240958  786638 main.go:141] libmachine: (test-preload-232055) Calling .Start
	I0920 19:36:51.241138  786638 main.go:141] libmachine: (test-preload-232055) Ensuring networks are active...
	I0920 19:36:51.241815  786638 main.go:141] libmachine: (test-preload-232055) Ensuring network default is active
	I0920 19:36:51.242081  786638 main.go:141] libmachine: (test-preload-232055) Ensuring network mk-test-preload-232055 is active
	I0920 19:36:51.242342  786638 main.go:141] libmachine: (test-preload-232055) Getting domain xml...
	I0920 19:36:51.243010  786638 main.go:141] libmachine: (test-preload-232055) Creating domain...
	I0920 19:36:52.425480  786638 main.go:141] libmachine: (test-preload-232055) Waiting to get IP...
	I0920 19:36:52.426429  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:36:52.426763  786638 main.go:141] libmachine: (test-preload-232055) DBG | unable to find current IP address of domain test-preload-232055 in network mk-test-preload-232055
	I0920 19:36:52.426833  786638 main.go:141] libmachine: (test-preload-232055) DBG | I0920 19:36:52.426754  786689 retry.go:31] will retry after 220.237234ms: waiting for machine to come up
	I0920 19:36:52.648323  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:36:52.648815  786638 main.go:141] libmachine: (test-preload-232055) DBG | unable to find current IP address of domain test-preload-232055 in network mk-test-preload-232055
	I0920 19:36:52.648846  786638 main.go:141] libmachine: (test-preload-232055) DBG | I0920 19:36:52.648780  786689 retry.go:31] will retry after 248.2336ms: waiting for machine to come up
	I0920 19:36:52.898277  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:36:52.898647  786638 main.go:141] libmachine: (test-preload-232055) DBG | unable to find current IP address of domain test-preload-232055 in network mk-test-preload-232055
	I0920 19:36:52.898670  786638 main.go:141] libmachine: (test-preload-232055) DBG | I0920 19:36:52.898605  786689 retry.go:31] will retry after 407.518058ms: waiting for machine to come up
	I0920 19:36:53.308046  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:36:53.308435  786638 main.go:141] libmachine: (test-preload-232055) DBG | unable to find current IP address of domain test-preload-232055 in network mk-test-preload-232055
	I0920 19:36:53.308461  786638 main.go:141] libmachine: (test-preload-232055) DBG | I0920 19:36:53.308395  786689 retry.go:31] will retry after 583.537441ms: waiting for machine to come up
	I0920 19:36:53.893142  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:36:53.893550  786638 main.go:141] libmachine: (test-preload-232055) DBG | unable to find current IP address of domain test-preload-232055 in network mk-test-preload-232055
	I0920 19:36:53.893574  786638 main.go:141] libmachine: (test-preload-232055) DBG | I0920 19:36:53.893499  786689 retry.go:31] will retry after 493.097771ms: waiting for machine to come up
	I0920 19:36:54.388114  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:36:54.388629  786638 main.go:141] libmachine: (test-preload-232055) DBG | unable to find current IP address of domain test-preload-232055 in network mk-test-preload-232055
	I0920 19:36:54.388655  786638 main.go:141] libmachine: (test-preload-232055) DBG | I0920 19:36:54.388609  786689 retry.go:31] will retry after 748.809968ms: waiting for machine to come up
	I0920 19:36:55.138596  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:36:55.138989  786638 main.go:141] libmachine: (test-preload-232055) DBG | unable to find current IP address of domain test-preload-232055 in network mk-test-preload-232055
	I0920 19:36:55.139021  786638 main.go:141] libmachine: (test-preload-232055) DBG | I0920 19:36:55.138917  786689 retry.go:31] will retry after 935.437101ms: waiting for machine to come up
	I0920 19:36:56.076044  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:36:56.076449  786638 main.go:141] libmachine: (test-preload-232055) DBG | unable to find current IP address of domain test-preload-232055 in network mk-test-preload-232055
	I0920 19:36:56.076475  786638 main.go:141] libmachine: (test-preload-232055) DBG | I0920 19:36:56.076393  786689 retry.go:31] will retry after 1.406082076s: waiting for machine to come up
	I0920 19:36:57.484449  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:36:57.484905  786638 main.go:141] libmachine: (test-preload-232055) DBG | unable to find current IP address of domain test-preload-232055 in network mk-test-preload-232055
	I0920 19:36:57.484933  786638 main.go:141] libmachine: (test-preload-232055) DBG | I0920 19:36:57.484864  786689 retry.go:31] will retry after 1.369498923s: waiting for machine to come up
	I0920 19:36:58.856351  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:36:58.856764  786638 main.go:141] libmachine: (test-preload-232055) DBG | unable to find current IP address of domain test-preload-232055 in network mk-test-preload-232055
	I0920 19:36:58.856791  786638 main.go:141] libmachine: (test-preload-232055) DBG | I0920 19:36:58.856722  786689 retry.go:31] will retry after 1.889178688s: waiting for machine to come up
	I0920 19:37:00.748902  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:00.749320  786638 main.go:141] libmachine: (test-preload-232055) DBG | unable to find current IP address of domain test-preload-232055 in network mk-test-preload-232055
	I0920 19:37:00.749344  786638 main.go:141] libmachine: (test-preload-232055) DBG | I0920 19:37:00.749258  786689 retry.go:31] will retry after 2.127427875s: waiting for machine to come up
	I0920 19:37:02.878173  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:02.878637  786638 main.go:141] libmachine: (test-preload-232055) DBG | unable to find current IP address of domain test-preload-232055 in network mk-test-preload-232055
	I0920 19:37:02.878669  786638 main.go:141] libmachine: (test-preload-232055) DBG | I0920 19:37:02.878589  786689 retry.go:31] will retry after 3.253258606s: waiting for machine to come up
	I0920 19:37:06.136166  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:06.136567  786638 main.go:141] libmachine: (test-preload-232055) DBG | unable to find current IP address of domain test-preload-232055 in network mk-test-preload-232055
	I0920 19:37:06.136598  786638 main.go:141] libmachine: (test-preload-232055) DBG | I0920 19:37:06.136531  786689 retry.go:31] will retry after 3.979627441s: waiting for machine to come up
	I0920 19:37:10.119964  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:10.120416  786638 main.go:141] libmachine: (test-preload-232055) Found IP for machine: 192.168.39.234
	I0920 19:37:10.120450  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has current primary IP address 192.168.39.234 and MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:10.120459  786638 main.go:141] libmachine: (test-preload-232055) Reserving static IP address...
	I0920 19:37:10.121041  786638 main.go:141] libmachine: (test-preload-232055) DBG | found host DHCP lease matching {name: "test-preload-232055", mac: "52:54:00:bf:04:df", ip: "192.168.39.234"} in network mk-test-preload-232055: {Iface:virbr1 ExpiryTime:2024-09-20 20:37:02 +0000 UTC Type:0 Mac:52:54:00:bf:04:df Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-232055 Clientid:01:52:54:00:bf:04:df}
	I0920 19:37:10.121082  786638 main.go:141] libmachine: (test-preload-232055) DBG | skip adding static IP to network mk-test-preload-232055 - found existing host DHCP lease matching {name: "test-preload-232055", mac: "52:54:00:bf:04:df", ip: "192.168.39.234"}
	I0920 19:37:10.121091  786638 main.go:141] libmachine: (test-preload-232055) Reserved static IP address: 192.168.39.234
	I0920 19:37:10.121110  786638 main.go:141] libmachine: (test-preload-232055) Waiting for SSH to be available...
	I0920 19:37:10.121125  786638 main.go:141] libmachine: (test-preload-232055) DBG | Getting to WaitForSSH function...
	I0920 19:37:10.123402  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:10.123727  786638 main.go:141] libmachine: (test-preload-232055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:04:df", ip: ""} in network mk-test-preload-232055: {Iface:virbr1 ExpiryTime:2024-09-20 20:37:02 +0000 UTC Type:0 Mac:52:54:00:bf:04:df Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-232055 Clientid:01:52:54:00:bf:04:df}
	I0920 19:37:10.123757  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:10.123892  786638 main.go:141] libmachine: (test-preload-232055) DBG | Using SSH client type: external
	I0920 19:37:10.123914  786638 main.go:141] libmachine: (test-preload-232055) DBG | Using SSH private key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/test-preload-232055/id_rsa (-rw-------)
	I0920 19:37:10.123960  786638 main.go:141] libmachine: (test-preload-232055) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.234 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19678-739831/.minikube/machines/test-preload-232055/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 19:37:10.123974  786638 main.go:141] libmachine: (test-preload-232055) DBG | About to run SSH command:
	I0920 19:37:10.123986  786638 main.go:141] libmachine: (test-preload-232055) DBG | exit 0
	I0920 19:37:10.250801  786638 main.go:141] libmachine: (test-preload-232055) DBG | SSH cmd err, output: <nil>: 
	I0920 19:37:10.251158  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetConfigRaw
	I0920 19:37:10.251937  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetIP
	I0920 19:37:10.254380  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:10.254754  786638 main.go:141] libmachine: (test-preload-232055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:04:df", ip: ""} in network mk-test-preload-232055: {Iface:virbr1 ExpiryTime:2024-09-20 20:37:02 +0000 UTC Type:0 Mac:52:54:00:bf:04:df Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-232055 Clientid:01:52:54:00:bf:04:df}
	I0920 19:37:10.254784  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:10.255064  786638 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/test-preload-232055/config.json ...
	I0920 19:37:10.255294  786638 machine.go:93] provisionDockerMachine start ...
	I0920 19:37:10.255316  786638 main.go:141] libmachine: (test-preload-232055) Calling .DriverName
	I0920 19:37:10.255522  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHHostname
	I0920 19:37:10.257802  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:10.258123  786638 main.go:141] libmachine: (test-preload-232055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:04:df", ip: ""} in network mk-test-preload-232055: {Iface:virbr1 ExpiryTime:2024-09-20 20:37:02 +0000 UTC Type:0 Mac:52:54:00:bf:04:df Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-232055 Clientid:01:52:54:00:bf:04:df}
	I0920 19:37:10.258153  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:10.258283  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHPort
	I0920 19:37:10.258459  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHKeyPath
	I0920 19:37:10.258614  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHKeyPath
	I0920 19:37:10.258750  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHUsername
	I0920 19:37:10.258926  786638 main.go:141] libmachine: Using SSH client type: native
	I0920 19:37:10.259123  786638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0920 19:37:10.259135  786638 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:37:10.370966  786638 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 19:37:10.370994  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetMachineName
	I0920 19:37:10.371242  786638 buildroot.go:166] provisioning hostname "test-preload-232055"
	I0920 19:37:10.371278  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetMachineName
	I0920 19:37:10.371467  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHHostname
	I0920 19:37:10.374084  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:10.374393  786638 main.go:141] libmachine: (test-preload-232055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:04:df", ip: ""} in network mk-test-preload-232055: {Iface:virbr1 ExpiryTime:2024-09-20 20:37:02 +0000 UTC Type:0 Mac:52:54:00:bf:04:df Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-232055 Clientid:01:52:54:00:bf:04:df}
	I0920 19:37:10.374430  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:10.374563  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHPort
	I0920 19:37:10.374742  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHKeyPath
	I0920 19:37:10.374911  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHKeyPath
	I0920 19:37:10.375087  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHUsername
	I0920 19:37:10.375272  786638 main.go:141] libmachine: Using SSH client type: native
	I0920 19:37:10.375498  786638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0920 19:37:10.375515  786638 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-232055 && echo "test-preload-232055" | sudo tee /etc/hostname
	I0920 19:37:10.500611  786638 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-232055
	
	I0920 19:37:10.500639  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHHostname
	I0920 19:37:10.503353  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:10.503806  786638 main.go:141] libmachine: (test-preload-232055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:04:df", ip: ""} in network mk-test-preload-232055: {Iface:virbr1 ExpiryTime:2024-09-20 20:37:02 +0000 UTC Type:0 Mac:52:54:00:bf:04:df Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-232055 Clientid:01:52:54:00:bf:04:df}
	I0920 19:37:10.503830  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:10.503986  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHPort
	I0920 19:37:10.504162  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHKeyPath
	I0920 19:37:10.504355  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHKeyPath
	I0920 19:37:10.504476  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHUsername
	I0920 19:37:10.504648  786638 main.go:141] libmachine: Using SSH client type: native
	I0920 19:37:10.504885  786638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0920 19:37:10.504905  786638 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-232055' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-232055/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-232055' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:37:10.627677  786638 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:37:10.627707  786638 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19678-739831/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-739831/.minikube}
	I0920 19:37:10.627728  786638 buildroot.go:174] setting up certificates
	I0920 19:37:10.627739  786638 provision.go:84] configureAuth start
	I0920 19:37:10.627747  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetMachineName
	I0920 19:37:10.628073  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetIP
	I0920 19:37:10.630821  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:10.631247  786638 main.go:141] libmachine: (test-preload-232055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:04:df", ip: ""} in network mk-test-preload-232055: {Iface:virbr1 ExpiryTime:2024-09-20 20:37:02 +0000 UTC Type:0 Mac:52:54:00:bf:04:df Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-232055 Clientid:01:52:54:00:bf:04:df}
	I0920 19:37:10.631283  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:10.631411  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHHostname
	I0920 19:37:10.633711  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:10.634020  786638 main.go:141] libmachine: (test-preload-232055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:04:df", ip: ""} in network mk-test-preload-232055: {Iface:virbr1 ExpiryTime:2024-09-20 20:37:02 +0000 UTC Type:0 Mac:52:54:00:bf:04:df Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-232055 Clientid:01:52:54:00:bf:04:df}
	I0920 19:37:10.634044  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:10.634186  786638 provision.go:143] copyHostCerts
	I0920 19:37:10.634241  786638 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem, removing ...
	I0920 19:37:10.634261  786638 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 19:37:10.634326  786638 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem (1078 bytes)
	I0920 19:37:10.634429  786638 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem, removing ...
	I0920 19:37:10.634437  786638 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 19:37:10.634462  786638 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem (1123 bytes)
	I0920 19:37:10.634531  786638 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem, removing ...
	I0920 19:37:10.634540  786638 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 19:37:10.634575  786638 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem (1679 bytes)
	I0920 19:37:10.634654  786638 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem org=jenkins.test-preload-232055 san=[127.0.0.1 192.168.39.234 localhost minikube test-preload-232055]
	I0920 19:37:10.882931  786638 provision.go:177] copyRemoteCerts
	I0920 19:37:10.882999  786638 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:37:10.883032  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHHostname
	I0920 19:37:10.885922  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:10.886208  786638 main.go:141] libmachine: (test-preload-232055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:04:df", ip: ""} in network mk-test-preload-232055: {Iface:virbr1 ExpiryTime:2024-09-20 20:37:02 +0000 UTC Type:0 Mac:52:54:00:bf:04:df Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-232055 Clientid:01:52:54:00:bf:04:df}
	I0920 19:37:10.886229  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:10.886398  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHPort
	I0920 19:37:10.886628  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHKeyPath
	I0920 19:37:10.886807  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHUsername
	I0920 19:37:10.886955  786638 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/test-preload-232055/id_rsa Username:docker}
	I0920 19:37:10.972923  786638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:37:10.996335  786638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0920 19:37:11.019991  786638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 19:37:11.042841  786638 provision.go:87] duration metric: took 415.087005ms to configureAuth
	I0920 19:37:11.042886  786638 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:37:11.043060  786638 config.go:182] Loaded profile config "test-preload-232055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0920 19:37:11.043133  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHHostname
	I0920 19:37:11.045383  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:11.045720  786638 main.go:141] libmachine: (test-preload-232055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:04:df", ip: ""} in network mk-test-preload-232055: {Iface:virbr1 ExpiryTime:2024-09-20 20:37:02 +0000 UTC Type:0 Mac:52:54:00:bf:04:df Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-232055 Clientid:01:52:54:00:bf:04:df}
	I0920 19:37:11.045755  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:11.045883  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHPort
	I0920 19:37:11.046119  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHKeyPath
	I0920 19:37:11.046280  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHKeyPath
	I0920 19:37:11.046385  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHUsername
	I0920 19:37:11.046522  786638 main.go:141] libmachine: Using SSH client type: native
	I0920 19:37:11.046690  786638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0920 19:37:11.046706  786638 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:37:11.279968  786638 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:37:11.280001  786638 machine.go:96] duration metric: took 1.024690601s to provisionDockerMachine
	I0920 19:37:11.280013  786638 start.go:293] postStartSetup for "test-preload-232055" (driver="kvm2")
	I0920 19:37:11.280026  786638 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:37:11.280056  786638 main.go:141] libmachine: (test-preload-232055) Calling .DriverName
	I0920 19:37:11.280410  786638 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:37:11.280456  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHHostname
	I0920 19:37:11.283041  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:11.283334  786638 main.go:141] libmachine: (test-preload-232055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:04:df", ip: ""} in network mk-test-preload-232055: {Iface:virbr1 ExpiryTime:2024-09-20 20:37:02 +0000 UTC Type:0 Mac:52:54:00:bf:04:df Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-232055 Clientid:01:52:54:00:bf:04:df}
	I0920 19:37:11.283367  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:11.283446  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHPort
	I0920 19:37:11.283765  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHKeyPath
	I0920 19:37:11.283942  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHUsername
	I0920 19:37:11.284125  786638 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/test-preload-232055/id_rsa Username:docker}
	I0920 19:37:11.369748  786638 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:37:11.373840  786638 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:37:11.373872  786638 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/addons for local assets ...
	I0920 19:37:11.373958  786638 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/files for local assets ...
	I0920 19:37:11.374050  786638 filesync.go:149] local asset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> 7484972.pem in /etc/ssl/certs
	I0920 19:37:11.374174  786638 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:37:11.383716  786638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 19:37:11.406840  786638 start.go:296] duration metric: took 126.810604ms for postStartSetup
	I0920 19:37:11.406893  786638 fix.go:56] duration metric: took 20.189258674s for fixHost
	I0920 19:37:11.406916  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHHostname
	I0920 19:37:11.409733  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:11.410086  786638 main.go:141] libmachine: (test-preload-232055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:04:df", ip: ""} in network mk-test-preload-232055: {Iface:virbr1 ExpiryTime:2024-09-20 20:37:02 +0000 UTC Type:0 Mac:52:54:00:bf:04:df Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-232055 Clientid:01:52:54:00:bf:04:df}
	I0920 19:37:11.410109  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:11.410244  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHPort
	I0920 19:37:11.410455  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHKeyPath
	I0920 19:37:11.410663  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHKeyPath
	I0920 19:37:11.410914  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHUsername
	I0920 19:37:11.411177  786638 main.go:141] libmachine: Using SSH client type: native
	I0920 19:37:11.411351  786638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0920 19:37:11.411363  786638 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:37:11.523524  786638 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726861031.498297590
	
	I0920 19:37:11.523554  786638 fix.go:216] guest clock: 1726861031.498297590
	I0920 19:37:11.523562  786638 fix.go:229] Guest: 2024-09-20 19:37:11.49829759 +0000 UTC Remote: 2024-09-20 19:37:11.406897599 +0000 UTC m=+24.387341150 (delta=91.399991ms)
	I0920 19:37:11.523583  786638 fix.go:200] guest clock delta is within tolerance: 91.399991ms
	I0920 19:37:11.523600  786638 start.go:83] releasing machines lock for "test-preload-232055", held for 20.305969912s
	I0920 19:37:11.523627  786638 main.go:141] libmachine: (test-preload-232055) Calling .DriverName
	I0920 19:37:11.523948  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetIP
	I0920 19:37:11.526472  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:11.526790  786638 main.go:141] libmachine: (test-preload-232055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:04:df", ip: ""} in network mk-test-preload-232055: {Iface:virbr1 ExpiryTime:2024-09-20 20:37:02 +0000 UTC Type:0 Mac:52:54:00:bf:04:df Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-232055 Clientid:01:52:54:00:bf:04:df}
	I0920 19:37:11.526827  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:11.526937  786638 main.go:141] libmachine: (test-preload-232055) Calling .DriverName
	I0920 19:37:11.527413  786638 main.go:141] libmachine: (test-preload-232055) Calling .DriverName
	I0920 19:37:11.527612  786638 main.go:141] libmachine: (test-preload-232055) Calling .DriverName
	I0920 19:37:11.527727  786638 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:37:11.527770  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHHostname
	I0920 19:37:11.527816  786638 ssh_runner.go:195] Run: cat /version.json
	I0920 19:37:11.527840  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHHostname
	I0920 19:37:11.530115  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:11.530374  786638 main.go:141] libmachine: (test-preload-232055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:04:df", ip: ""} in network mk-test-preload-232055: {Iface:virbr1 ExpiryTime:2024-09-20 20:37:02 +0000 UTC Type:0 Mac:52:54:00:bf:04:df Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-232055 Clientid:01:52:54:00:bf:04:df}
	I0920 19:37:11.530406  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:11.530504  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHPort
	I0920 19:37:11.530665  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:11.530679  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHKeyPath
	I0920 19:37:11.530868  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHUsername
	I0920 19:37:11.531026  786638 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/test-preload-232055/id_rsa Username:docker}
	I0920 19:37:11.531065  786638 main.go:141] libmachine: (test-preload-232055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:04:df", ip: ""} in network mk-test-preload-232055: {Iface:virbr1 ExpiryTime:2024-09-20 20:37:02 +0000 UTC Type:0 Mac:52:54:00:bf:04:df Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-232055 Clientid:01:52:54:00:bf:04:df}
	I0920 19:37:11.531096  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:11.531270  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHPort
	I0920 19:37:11.531414  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHKeyPath
	I0920 19:37:11.531551  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHUsername
	I0920 19:37:11.531671  786638 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/test-preload-232055/id_rsa Username:docker}
	I0920 19:37:11.637720  786638 ssh_runner.go:195] Run: systemctl --version
	I0920 19:37:11.643575  786638 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:37:11.783116  786638 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 19:37:11.789982  786638 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:37:11.790049  786638 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:37:11.805281  786638 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 19:37:11.805308  786638 start.go:495] detecting cgroup driver to use...
	I0920 19:37:11.805386  786638 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:37:11.822012  786638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:37:11.835817  786638 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:37:11.835883  786638 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:37:11.849455  786638 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:37:11.864184  786638 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:37:11.983518  786638 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:37:12.121889  786638 docker.go:233] disabling docker service ...
	I0920 19:37:12.121986  786638 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:37:12.137755  786638 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:37:12.150672  786638 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:37:12.288811  786638 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:37:12.399329  786638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:37:12.413127  786638 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:37:12.430914  786638 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0920 19:37:12.430978  786638 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:37:12.440825  786638 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:37:12.440886  786638 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:37:12.450709  786638 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:37:12.460549  786638 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:37:12.470235  786638 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:37:12.480299  786638 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:37:12.490740  786638 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:37:12.507198  786638 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:37:12.517056  786638 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:37:12.525703  786638 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 19:37:12.525753  786638 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 19:37:12.537695  786638 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:37:12.547573  786638 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:37:12.659364  786638 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 19:37:12.748771  786638 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:37:12.748857  786638 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:37:12.753508  786638 start.go:563] Will wait 60s for crictl version
	I0920 19:37:12.753562  786638 ssh_runner.go:195] Run: which crictl
	I0920 19:37:12.757308  786638 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:37:12.797309  786638 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 19:37:12.797399  786638 ssh_runner.go:195] Run: crio --version
	I0920 19:37:12.824680  786638 ssh_runner.go:195] Run: crio --version
	I0920 19:37:12.854626  786638 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0920 19:37:12.856049  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetIP
	I0920 19:37:12.858917  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:12.859275  786638 main.go:141] libmachine: (test-preload-232055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:04:df", ip: ""} in network mk-test-preload-232055: {Iface:virbr1 ExpiryTime:2024-09-20 20:37:02 +0000 UTC Type:0 Mac:52:54:00:bf:04:df Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-232055 Clientid:01:52:54:00:bf:04:df}
	I0920 19:37:12.859304  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:12.859564  786638 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 19:37:12.863735  786638 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:37:12.875820  786638 kubeadm.go:883] updating cluster {Name:test-preload-232055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-232055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:37:12.875955  786638 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0920 19:37:12.875999  786638 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:37:12.910186  786638 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0920 19:37:12.910255  786638 ssh_runner.go:195] Run: which lz4
	I0920 19:37:12.914091  786638 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 19:37:12.918111  786638 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 19:37:12.918149  786638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0920 19:37:14.445082  786638 crio.go:462] duration metric: took 1.531011445s to copy over tarball
	I0920 19:37:14.445159  786638 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 19:37:16.791387  786638 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.346196687s)
	I0920 19:37:16.791418  786638 crio.go:469] duration metric: took 2.34630809s to extract the tarball
	I0920 19:37:16.791426  786638 ssh_runner.go:146] rm: /preloaded.tar.lz4
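	(The lines above show the preload path: scp the cached image tarball to the guest, untar it into /var with lz4 decompression, then delete it. A hedged sketch of the extraction step, shelling out the same way the log records; the tarball path is the one from the log, and sudo, tar and lz4 are assumed to be available on the guest.)

	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		// mirrors: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("extract failed: %v\n%s", err, out)
		}
		log.Printf("took %s to extract the tarball", time.Since(start))
	}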
	I0920 19:37:16.832217  786638 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:37:16.873816  786638 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0920 19:37:16.873842  786638 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 19:37:16.873900  786638 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:37:16.873955  786638 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0920 19:37:16.873973  786638 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0920 19:37:16.873997  786638 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0920 19:37:16.874020  786638 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0920 19:37:16.873977  786638 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 19:37:16.874027  786638 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0920 19:37:16.874043  786638 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0920 19:37:16.875517  786638 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0920 19:37:16.875538  786638 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0920 19:37:16.875521  786638 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0920 19:37:16.875521  786638 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0920 19:37:16.875596  786638 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0920 19:37:16.875519  786638 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 19:37:16.875527  786638 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0920 19:37:16.875527  786638 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:37:17.033231  786638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0920 19:37:17.039538  786638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0920 19:37:17.043696  786638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0920 19:37:17.050362  786638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0920 19:37:17.064419  786638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0920 19:37:17.068859  786638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0920 19:37:17.100508  786638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0920 19:37:17.130468  786638 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0920 19:37:17.130522  786638 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0920 19:37:17.130584  786638 ssh_runner.go:195] Run: which crictl
	I0920 19:37:17.130609  786638 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0920 19:37:17.130648  786638 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 19:37:17.130695  786638 ssh_runner.go:195] Run: which crictl
	I0920 19:37:17.187162  786638 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0920 19:37:17.187225  786638 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0920 19:37:17.187278  786638 ssh_runner.go:195] Run: which crictl
	I0920 19:37:17.210621  786638 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0920 19:37:17.210684  786638 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0920 19:37:17.210718  786638 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0920 19:37:17.210768  786638 ssh_runner.go:195] Run: which crictl
	I0920 19:37:17.210774  786638 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0920 19:37:17.210801  786638 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0920 19:37:17.210689  786638 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0920 19:37:17.210873  786638 ssh_runner.go:195] Run: which crictl
	I0920 19:37:17.210875  786638 ssh_runner.go:195] Run: which crictl
	I0920 19:37:17.212185  786638 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:37:17.227562  786638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0920 19:37:17.227568  786638 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0920 19:37:17.227618  786638 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0920 19:37:17.227625  786638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0920 19:37:17.227648  786638 ssh_runner.go:195] Run: which crictl
	I0920 19:37:17.227687  786638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0920 19:37:17.227740  786638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0920 19:37:17.227846  786638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0920 19:37:17.227864  786638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0920 19:37:17.476454  786638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0920 19:37:17.476473  786638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0920 19:37:17.476454  786638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0920 19:37:17.476510  786638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0920 19:37:17.476546  786638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0920 19:37:17.476552  786638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0920 19:37:17.476614  786638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0920 19:37:17.619610  786638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0920 19:37:17.622460  786638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0920 19:37:17.622478  786638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0920 19:37:17.622563  786638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0920 19:37:17.622678  786638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0920 19:37:17.622710  786638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0920 19:37:17.622794  786638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0920 19:37:17.708858  786638 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0920 19:37:17.708999  786638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0920 19:37:17.771819  786638 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0920 19:37:17.771947  786638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0920 19:37:17.782832  786638 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0920 19:37:17.782922  786638 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0920 19:37:17.782922  786638 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0920 19:37:17.782966  786638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0920 19:37:17.783016  786638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0920 19:37:17.783044  786638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0920 19:37:17.783131  786638 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0920 19:37:17.783204  786638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0920 19:37:17.783208  786638 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0920 19:37:17.783223  786638 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0920 19:37:17.783206  786638 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0920 19:37:17.783260  786638 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0920 19:37:17.783263  786638 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0920 19:37:17.787662  786638 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0920 19:37:17.797626  786638 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0920 19:37:17.798026  786638 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0920 19:37:17.827399  786638 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0920 19:37:17.827477  786638 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0920 19:37:17.827601  786638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0920 19:37:20.463589  786638 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4: (2.680296349s)
	I0920 19:37:20.463629  786638 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19678-739831/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0920 19:37:20.463667  786638 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0920 19:37:20.463741  786638 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0920 19:37:20.463674  786638 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (2.636049155s)
	I0920 19:37:20.463798  786638 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0920 19:37:20.609641  786638 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19678-739831/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0920 19:37:20.609703  786638 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0920 19:37:20.609776  786638 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0920 19:37:21.059421  786638 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19678-739831/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0920 19:37:21.059485  786638 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0920 19:37:21.059542  786638 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0920 19:37:21.399304  786638 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19678-739831/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0920 19:37:21.399353  786638 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0920 19:37:21.399412  786638 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0920 19:37:23.647308  786638 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.247866407s)
	I0920 19:37:23.647350  786638 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19678-739831/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0920 19:37:23.647381  786638 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0920 19:37:23.647432  786638 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0920 19:37:24.392127  786638 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19678-739831/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0920 19:37:24.392177  786638 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0920 19:37:24.392228  786638 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0920 19:37:25.236756  786638 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19678-739831/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0920 19:37:25.236824  786638 cache_images.go:123] Successfully loaded all cached images
	I0920 19:37:25.236832  786638 cache_images.go:92] duration metric: took 8.362976223s to LoadCachedImages
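	(The block above is the cache_images cycle: for each required image, podman image inspect reports the on-host ID; when it does not match the expected hash, the image is removed with crictl rmi and reloaded from the cached tarball with podman load. A rough Go sketch of that check-then-load step, assuming the expected ID and tarball path are known up front; the values below are copied from the pause:3.7 lines in the log.)

	package main

	import (
		"log"
		"os/exec"
		"strings"
	)

	// ensureImage reloads an image from its cached tarball unless it is already
	// present at the expected ID, mirroring the inspect / rmi / load sequence above.
	func ensureImage(image, wantID, tarball string) error {
		out, _ := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		if strings.TrimSpace(string(out)) == wantID {
			return nil // already present at the expected hash
		}
		_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
		return exec.Command("sudo", "podman", "load", "-i", tarball).Run()
	}

	func main() {
		err := ensureImage(
			"registry.k8s.io/pause:3.7",
			"221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165",
			"/var/lib/minikube/images/pause_3.7",
		)
		if err != nil {
			log.Fatal(err)
		}
	}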
	I0920 19:37:25.236848  786638 kubeadm.go:934] updating node { 192.168.39.234 8443 v1.24.4 crio true true} ...
	I0920 19:37:25.236978  786638 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-232055 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.234
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-232055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:37:25.237073  786638 ssh_runner.go:195] Run: crio config
	I0920 19:37:25.283461  786638 cni.go:84] Creating CNI manager for ""
	I0920 19:37:25.283490  786638 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:37:25.283505  786638 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:37:25.283530  786638 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.234 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-232055 NodeName:test-preload-232055 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.234"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.234 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:37:25.283732  786638 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.234
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-232055"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.234
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.234"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 19:37:25.283829  786638 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0920 19:37:25.294259  786638 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:37:25.294325  786638 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:37:25.303651  786638 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0920 19:37:25.319942  786638 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:37:25.336472  786638 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0920 19:37:25.353420  786638 ssh_runner.go:195] Run: grep 192.168.39.234	control-plane.minikube.internal$ /etc/hosts
	I0920 19:37:25.357199  786638 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.234	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:37:25.369197  786638 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:37:25.487004  786638 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:37:25.505097  786638 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/test-preload-232055 for IP: 192.168.39.234
	I0920 19:37:25.505128  786638 certs.go:194] generating shared ca certs ...
	I0920 19:37:25.505152  786638 certs.go:226] acquiring lock for ca certs: {Name:mkf559981e1ff96dd3b092845a7637f34a653668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:37:25.505348  786638 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key
	I0920 19:37:25.505427  786638 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key
	I0920 19:37:25.505458  786638 certs.go:256] generating profile certs ...
	I0920 19:37:25.505569  786638 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/test-preload-232055/client.key
	I0920 19:37:25.505660  786638 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/test-preload-232055/apiserver.key.e117b990
	I0920 19:37:25.505720  786638 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/test-preload-232055/proxy-client.key
	I0920 19:37:25.505893  786638 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem (1338 bytes)
	W0920 19:37:25.505937  786638 certs.go:480] ignoring /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497_empty.pem, impossibly tiny 0 bytes
	I0920 19:37:25.505948  786638 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:37:25.505970  786638 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:37:25.506017  786638 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:37:25.506062  786638 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem (1679 bytes)
	I0920 19:37:25.506116  786638 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 19:37:25.506872  786638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:37:25.538573  786638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:37:25.571022  786638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:37:25.613271  786638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:37:25.639970  786638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/test-preload-232055/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0920 19:37:25.666107  786638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/test-preload-232055/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 19:37:25.696848  786638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/test-preload-232055/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:37:25.723399  786638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/test-preload-232055/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 19:37:25.758221  786638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /usr/share/ca-certificates/7484972.pem (1708 bytes)
	I0920 19:37:25.780931  786638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:37:25.803756  786638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem --> /usr/share/ca-certificates/748497.pem (1338 bytes)
	I0920 19:37:25.826309  786638 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:37:25.842634  786638 ssh_runner.go:195] Run: openssl version
	I0920 19:37:25.848455  786638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/748497.pem && ln -fs /usr/share/ca-certificates/748497.pem /etc/ssl/certs/748497.pem"
	I0920 19:37:25.859099  786638 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/748497.pem
	I0920 19:37:25.863610  786638 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 18:33 /usr/share/ca-certificates/748497.pem
	I0920 19:37:25.863675  786638 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/748497.pem
	I0920 19:37:25.869412  786638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/748497.pem /etc/ssl/certs/51391683.0"
	I0920 19:37:25.879839  786638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7484972.pem && ln -fs /usr/share/ca-certificates/7484972.pem /etc/ssl/certs/7484972.pem"
	I0920 19:37:25.890304  786638 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7484972.pem
	I0920 19:37:25.894668  786638 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 18:33 /usr/share/ca-certificates/7484972.pem
	I0920 19:37:25.894724  786638 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7484972.pem
	I0920 19:37:25.900177  786638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7484972.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:37:25.910509  786638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:37:25.921001  786638 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:37:25.925282  786638 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:37:25.925330  786638 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:37:25.930987  786638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
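	(The ls/openssl/ln sequence above installs each CA into the system trust store using OpenSSL's subject-hash convention: a symlink /etc/ssl/certs/<hash>.0 must point at the PEM, where <hash> comes from openssl x509 -hash. A small Go sketch of that step, assuming openssl is on PATH and the process may write to /etc/ssl/certs; the PEM path is the one from the log.)

	package main

	import (
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// ln -fs equivalent: remove any stale link, then point <hash>.0 at the PEM.
		_ = os.Remove(link)
		if err := os.Symlink(pem, link); err != nil {
			log.Fatal(err)
		}
		log.Printf("%s -> %s", link, pem)
	}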
	I0920 19:37:25.941467  786638 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:37:25.945881  786638 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:37:25.951713  786638 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:37:25.957284  786638 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:37:25.963176  786638 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:37:25.968937  786638 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:37:25.974622  786638 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
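	(The openssl x509 -checkend 86400 calls above ask whether each certificate remains valid for at least another 24 hours. The same check can be done natively; a minimal sketch using crypto/x509, reading one of the certificates checked above.)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// equivalent to: openssl x509 -noout -checkend 86400
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate will expire within 86400s")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least another day")
	}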
	I0920 19:37:25.980135  786638 kubeadm.go:392] StartCluster: {Name:test-preload-232055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-232055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:37:25.980226  786638 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:37:25.980280  786638 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:37:26.017892  786638 cri.go:89] found id: ""
	I0920 19:37:26.017977  786638 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:37:26.028069  786638 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 19:37:26.028088  786638 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 19:37:26.028139  786638 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 19:37:26.037751  786638 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:37:26.038212  786638 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-232055" does not appear in /home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 19:37:26.038357  786638 kubeconfig.go:62] /home/jenkins/minikube-integration/19678-739831/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-232055" cluster setting kubeconfig missing "test-preload-232055" context setting]
	I0920 19:37:26.038666  786638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/kubeconfig: {Name:mk275c54cf52b0ccdc22fcaa39c7b9c31092c648 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:37:26.039343  786638 kapi.go:59] client config for test-preload-232055: &rest.Config{Host:"https://192.168.39.234:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/profiles/test-preload-232055/client.crt", KeyFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/profiles/test-preload-232055/client.key", CAFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 19:37:26.040008  786638 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 19:37:26.049570  786638 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.234
	I0920 19:37:26.049602  786638 kubeadm.go:1160] stopping kube-system containers ...
	I0920 19:37:26.049614  786638 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 19:37:26.049651  786638 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:37:26.085750  786638 cri.go:89] found id: ""
	I0920 19:37:26.085861  786638 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 19:37:26.102672  786638 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:37:26.113677  786638 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:37:26.113704  786638 kubeadm.go:157] found existing configuration files:
	
	I0920 19:37:26.113762  786638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:37:26.122699  786638 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:37:26.122758  786638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:37:26.131810  786638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:37:26.140468  786638 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:37:26.140518  786638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:37:26.149552  786638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:37:26.158592  786638 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:37:26.158634  786638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:37:26.167592  786638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:37:26.176601  786638 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:37:26.176657  786638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:37:26.185935  786638 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
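	(The config-check block above keeps a kubeconfig only if it already references https://control-plane.minikube.internal:8443; anything else is removed so the following kubeadm init phase kubeconfig run regenerates it. A compact Go sketch of that cleanup loop; the file list and endpoint are taken from the log, and doing this for real requires root.)

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		for _, conf := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(conf)
			if err != nil || !strings.Contains(string(data), endpoint) {
				fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
				_ = os.Remove(conf) // equivalent to: sudo rm -f <conf>
			}
		}
	}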
	I0920 19:37:26.195401  786638 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:37:26.281605  786638 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:37:26.911619  786638 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:37:27.186328  786638 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:37:27.261738  786638 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:37:27.347478  786638 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:37:27.347579  786638 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:37:27.847920  786638 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:37:28.347900  786638 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:37:28.388611  786638 api_server.go:72] duration metric: took 1.041136629s to wait for apiserver process to appear ...
	I0920 19:37:28.388639  786638 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:37:28.388671  786638 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I0920 19:37:28.389220  786638 api_server.go:269] stopped: https://192.168.39.234:8443/healthz: Get "https://192.168.39.234:8443/healthz": dial tcp 192.168.39.234:8443: connect: connection refused
	I0920 19:37:28.888809  786638 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I0920 19:37:32.343226  786638 api_server.go:279] https://192.168.39.234:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:37:32.343260  786638 api_server.go:103] status: https://192.168.39.234:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:37:32.343278  786638 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I0920 19:37:32.375181  786638 api_server.go:279] https://192.168.39.234:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 19:37:32.375217  786638 api_server.go:103] status: https://192.168.39.234:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 19:37:32.389400  786638 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I0920 19:37:32.435233  786638 api_server.go:279] https://192.168.39.234:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:37:32.435278  786638 api_server.go:103] status: https://192.168.39.234:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:37:32.888762  786638 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I0920 19:37:32.894227  786638 api_server.go:279] https://192.168.39.234:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:37:32.894259  786638 api_server.go:103] status: https://192.168.39.234:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:37:33.388814  786638 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I0920 19:37:33.393355  786638 api_server.go:279] https://192.168.39.234:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:37:33.393385  786638 api_server.go:103] status: https://192.168.39.234:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:37:33.888881  786638 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I0920 19:37:33.893983  786638 api_server.go:279] https://192.168.39.234:8443/healthz returned 200:
	ok
	I0920 19:37:33.900115  786638 api_server.go:141] control plane version: v1.24.4
	I0920 19:37:33.900146  786638 api_server.go:131] duration metric: took 5.511492392s to wait for apiserver health ...
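	(The healthz wait above is a simple poll: hit https://<apiserver>:8443/healthz every 500ms, tolerate connection refused, 403 (anonymous user before RBAC bootstrap) and 500 (post-start hooks still failing), and stop at the first 200. A hedged Go sketch of that loop; the endpoint and interval are taken from the log, and certificate verification is skipped here for brevity, unlike the real client which trusts the cluster CA.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// no client certificate is presented, so 403s are expected until RBAC is bootstrapped
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.39.234:8443/healthz"
		for {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("apiserver not reachable yet:", err)
			} else {
				resp.Body.Close()
				fmt.Println("healthz status:", resp.StatusCode)
				if resp.StatusCode == http.StatusOK {
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}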
	I0920 19:37:33.900156  786638 cni.go:84] Creating CNI manager for ""
	I0920 19:37:33.900162  786638 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:37:33.901908  786638 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 19:37:33.903107  786638 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 19:37:33.914341  786638 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 19:37:33.965407  786638 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:37:33.965510  786638 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0920 19:37:33.965529  786638 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0920 19:37:33.975539  786638 system_pods.go:59] 7 kube-system pods found
	I0920 19:37:33.975579  786638 system_pods.go:61] "coredns-6d4b75cb6d-hxjzb" [896e3237-03bb-4ca1-8bf9-62f356f7131a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 19:37:33.975588  786638 system_pods.go:61] "etcd-test-preload-232055" [acfb9715-5154-4c99-b5c6-e44ea139a7e0] Running
	I0920 19:37:33.975594  786638 system_pods.go:61] "kube-apiserver-test-preload-232055" [4666930a-f715-42e5-aac8-d9076be7f547] Running
	I0920 19:37:33.975601  786638 system_pods.go:61] "kube-controller-manager-test-preload-232055" [8f006137-b60e-4cf0-b02d-611b8cdf38a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 19:37:33.975615  786638 system_pods.go:61] "kube-proxy-5q9tw" [c857a9f8-a3e8-422d-bfc4-908564e3c0c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 19:37:33.975622  786638 system_pods.go:61] "kube-scheduler-test-preload-232055" [6739376b-aeb0-46f9-a08f-37549421af03] Running
	I0920 19:37:33.975630  786638 system_pods.go:61] "storage-provisioner" [df0687ac-fa2e-45b0-951c-af46eeb7b2b6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 19:37:33.975639  786638 system_pods.go:74] duration metric: took 10.206988ms to wait for pod list to return data ...
	I0920 19:37:33.975658  786638 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:37:33.978752  786638 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:37:33.978775  786638 node_conditions.go:123] node cpu capacity is 2
	I0920 19:37:33.978786  786638 node_conditions.go:105] duration metric: took 3.120217ms to run NodePressure ...
	I0920 19:37:33.978803  786638 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 19:37:34.210329  786638 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 19:37:34.214795  786638 kubeadm.go:739] kubelet initialised
	I0920 19:37:34.214817  786638 kubeadm.go:740] duration metric: took 4.46148ms waiting for restarted kubelet to initialise ...
	I0920 19:37:34.214825  786638 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:37:34.225516  786638 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-hxjzb" in "kube-system" namespace to be "Ready" ...
	I0920 19:37:34.231630  786638 pod_ready.go:98] node "test-preload-232055" hosting pod "coredns-6d4b75cb6d-hxjzb" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-232055" has status "Ready":"False"
	I0920 19:37:34.231655  786638 pod_ready.go:82] duration metric: took 6.114762ms for pod "coredns-6d4b75cb6d-hxjzb" in "kube-system" namespace to be "Ready" ...
	E0920 19:37:34.231666  786638 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-232055" hosting pod "coredns-6d4b75cb6d-hxjzb" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-232055" has status "Ready":"False"
	I0920 19:37:34.231674  786638 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-232055" in "kube-system" namespace to be "Ready" ...
	I0920 19:37:34.239928  786638 pod_ready.go:98] node "test-preload-232055" hosting pod "etcd-test-preload-232055" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-232055" has status "Ready":"False"
	I0920 19:37:34.239952  786638 pod_ready.go:82] duration metric: took 8.265507ms for pod "etcd-test-preload-232055" in "kube-system" namespace to be "Ready" ...
	E0920 19:37:34.239963  786638 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-232055" hosting pod "etcd-test-preload-232055" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-232055" has status "Ready":"False"
	I0920 19:37:34.239971  786638 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-232055" in "kube-system" namespace to be "Ready" ...
	I0920 19:37:34.245701  786638 pod_ready.go:98] node "test-preload-232055" hosting pod "kube-apiserver-test-preload-232055" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-232055" has status "Ready":"False"
	I0920 19:37:34.245730  786638 pod_ready.go:82] duration metric: took 5.746736ms for pod "kube-apiserver-test-preload-232055" in "kube-system" namespace to be "Ready" ...
	E0920 19:37:34.245739  786638 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-232055" hosting pod "kube-apiserver-test-preload-232055" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-232055" has status "Ready":"False"
	I0920 19:37:34.245747  786638 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-232055" in "kube-system" namespace to be "Ready" ...
	I0920 19:37:34.371714  786638 pod_ready.go:98] node "test-preload-232055" hosting pod "kube-controller-manager-test-preload-232055" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-232055" has status "Ready":"False"
	I0920 19:37:34.371744  786638 pod_ready.go:82] duration metric: took 125.987746ms for pod "kube-controller-manager-test-preload-232055" in "kube-system" namespace to be "Ready" ...
	E0920 19:37:34.371754  786638 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-232055" hosting pod "kube-controller-manager-test-preload-232055" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-232055" has status "Ready":"False"
	I0920 19:37:34.371760  786638 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5q9tw" in "kube-system" namespace to be "Ready" ...
	I0920 19:37:34.769322  786638 pod_ready.go:98] node "test-preload-232055" hosting pod "kube-proxy-5q9tw" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-232055" has status "Ready":"False"
	I0920 19:37:34.769358  786638 pod_ready.go:82] duration metric: took 397.58773ms for pod "kube-proxy-5q9tw" in "kube-system" namespace to be "Ready" ...
	E0920 19:37:34.769370  786638 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-232055" hosting pod "kube-proxy-5q9tw" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-232055" has status "Ready":"False"
	I0920 19:37:34.769377  786638 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-232055" in "kube-system" namespace to be "Ready" ...
	I0920 19:37:35.168869  786638 pod_ready.go:98] node "test-preload-232055" hosting pod "kube-scheduler-test-preload-232055" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-232055" has status "Ready":"False"
	I0920 19:37:35.168906  786638 pod_ready.go:82] duration metric: took 399.520718ms for pod "kube-scheduler-test-preload-232055" in "kube-system" namespace to be "Ready" ...
	E0920 19:37:35.168918  786638 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-232055" hosting pod "kube-scheduler-test-preload-232055" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-232055" has status "Ready":"False"
	I0920 19:37:35.168926  786638 pod_ready.go:39] duration metric: took 954.092539ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:37:35.168960  786638 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 19:37:35.181023  786638 ops.go:34] apiserver oom_adj: -16
	I0920 19:37:35.181044  786638 kubeadm.go:597] duration metric: took 9.152950871s to restartPrimaryControlPlane
	I0920 19:37:35.181051  786638 kubeadm.go:394] duration metric: took 9.200924543s to StartCluster
	I0920 19:37:35.181068  786638 settings.go:142] acquiring lock: {Name:mk0bd1e421bf437575c076c52c1ff2f74497a1ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:37:35.181137  786638 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 19:37:35.181754  786638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/kubeconfig: {Name:mk275c54cf52b0ccdc22fcaa39c7b9c31092c648 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:37:35.182008  786638 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:37:35.182133  786638 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 19:37:35.182225  786638 config.go:182] Loaded profile config "test-preload-232055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0920 19:37:35.182242  786638 addons.go:69] Setting storage-provisioner=true in profile "test-preload-232055"
	I0920 19:37:35.182263  786638 addons.go:234] Setting addon storage-provisioner=true in "test-preload-232055"
	I0920 19:37:35.182263  786638 addons.go:69] Setting default-storageclass=true in profile "test-preload-232055"
	W0920 19:37:35.182272  786638 addons.go:243] addon storage-provisioner should already be in state true
	I0920 19:37:35.182277  786638 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-232055"
	I0920 19:37:35.182304  786638 host.go:66] Checking if "test-preload-232055" exists ...
	I0920 19:37:35.182604  786638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:37:35.182650  786638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:37:35.182749  786638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:37:35.182793  786638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:37:35.183867  786638 out.go:177] * Verifying Kubernetes components...
	I0920 19:37:35.185379  786638 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:37:35.198707  786638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40343
	I0920 19:37:35.198990  786638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38639
	I0920 19:37:35.199199  786638 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:37:35.199477  786638 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:37:35.199702  786638 main.go:141] libmachine: Using API Version  1
	I0920 19:37:35.199721  786638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:37:35.199999  786638 main.go:141] libmachine: Using API Version  1
	I0920 19:37:35.200037  786638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:37:35.200092  786638 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:37:35.200292  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetState
	I0920 19:37:35.200403  786638 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:37:35.200982  786638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:37:35.201030  786638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:37:35.202990  786638 kapi.go:59] client config for test-preload-232055: &rest.Config{Host:"https://192.168.39.234:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/profiles/test-preload-232055/client.crt", KeyFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/profiles/test-preload-232055/client.key", CAFile:"/home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 19:37:35.203339  786638 addons.go:234] Setting addon default-storageclass=true in "test-preload-232055"
	W0920 19:37:35.203359  786638 addons.go:243] addon default-storageclass should already be in state true
	I0920 19:37:35.203387  786638 host.go:66] Checking if "test-preload-232055" exists ...
	I0920 19:37:35.203747  786638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:37:35.203797  786638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:37:35.215986  786638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33693
	I0920 19:37:35.216491  786638 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:37:35.217023  786638 main.go:141] libmachine: Using API Version  1
	I0920 19:37:35.217048  786638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:37:35.217348  786638 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:37:35.217556  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetState
	I0920 19:37:35.219346  786638 main.go:141] libmachine: (test-preload-232055) Calling .DriverName
	I0920 19:37:35.221513  786638 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:37:35.222455  786638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34975
	I0920 19:37:35.222813  786638 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:37:35.222978  786638 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:37:35.223000  786638 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 19:37:35.223020  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHHostname
	I0920 19:37:35.223292  786638 main.go:141] libmachine: Using API Version  1
	I0920 19:37:35.223313  786638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:37:35.223655  786638 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:37:35.224141  786638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:37:35.224179  786638 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:37:35.226029  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:35.226450  786638 main.go:141] libmachine: (test-preload-232055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:04:df", ip: ""} in network mk-test-preload-232055: {Iface:virbr1 ExpiryTime:2024-09-20 20:37:02 +0000 UTC Type:0 Mac:52:54:00:bf:04:df Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-232055 Clientid:01:52:54:00:bf:04:df}
	I0920 19:37:35.226484  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:35.226657  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHPort
	I0920 19:37:35.226830  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHKeyPath
	I0920 19:37:35.227011  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHUsername
	I0920 19:37:35.227168  786638 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/test-preload-232055/id_rsa Username:docker}
	I0920 19:37:35.278501  786638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32889
	I0920 19:37:35.279058  786638 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:37:35.279503  786638 main.go:141] libmachine: Using API Version  1
	I0920 19:37:35.279523  786638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:37:35.279925  786638 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:37:35.280125  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetState
	I0920 19:37:35.281595  786638 main.go:141] libmachine: (test-preload-232055) Calling .DriverName
	I0920 19:37:35.281822  786638 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 19:37:35.281839  786638 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 19:37:35.281858  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHHostname
	I0920 19:37:35.284597  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:35.285037  786638 main.go:141] libmachine: (test-preload-232055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:04:df", ip: ""} in network mk-test-preload-232055: {Iface:virbr1 ExpiryTime:2024-09-20 20:37:02 +0000 UTC Type:0 Mac:52:54:00:bf:04:df Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-232055 Clientid:01:52:54:00:bf:04:df}
	I0920 19:37:35.285067  786638 main.go:141] libmachine: (test-preload-232055) DBG | domain test-preload-232055 has defined IP address 192.168.39.234 and MAC address 52:54:00:bf:04:df in network mk-test-preload-232055
	I0920 19:37:35.285217  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHPort
	I0920 19:37:35.285398  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHKeyPath
	I0920 19:37:35.285545  786638 main.go:141] libmachine: (test-preload-232055) Calling .GetSSHUsername
	I0920 19:37:35.285693  786638 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/test-preload-232055/id_rsa Username:docker}
	I0920 19:37:35.374627  786638 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:37:35.392370  786638 node_ready.go:35] waiting up to 6m0s for node "test-preload-232055" to be "Ready" ...
	I0920 19:37:35.472632  786638 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 19:37:35.505446  786638 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:37:36.385971  786638 main.go:141] libmachine: Making call to close driver server
	I0920 19:37:36.385994  786638 main.go:141] libmachine: (test-preload-232055) Calling .Close
	I0920 19:37:36.386302  786638 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:37:36.386318  786638 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:37:36.386332  786638 main.go:141] libmachine: Making call to close driver server
	I0920 19:37:36.386340  786638 main.go:141] libmachine: (test-preload-232055) Calling .Close
	I0920 19:37:36.386597  786638 main.go:141] libmachine: (test-preload-232055) DBG | Closing plugin on server side
	I0920 19:37:36.386659  786638 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:37:36.386681  786638 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:37:36.392643  786638 main.go:141] libmachine: Making call to close driver server
	I0920 19:37:36.392663  786638 main.go:141] libmachine: (test-preload-232055) Calling .Close
	I0920 19:37:36.392938  786638 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:37:36.392955  786638 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:37:36.413649  786638 main.go:141] libmachine: Making call to close driver server
	I0920 19:37:36.413672  786638 main.go:141] libmachine: (test-preload-232055) Calling .Close
	I0920 19:37:36.413967  786638 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:37:36.414001  786638 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:37:36.414010  786638 main.go:141] libmachine: Making call to close driver server
	I0920 19:37:36.414020  786638 main.go:141] libmachine: (test-preload-232055) Calling .Close
	I0920 19:37:36.414259  786638 main.go:141] libmachine: (test-preload-232055) DBG | Closing plugin on server side
	I0920 19:37:36.414268  786638 main.go:141] libmachine: Successfully made call to close driver server
	I0920 19:37:36.414280  786638 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 19:37:36.416077  786638 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0920 19:37:36.417253  786638 addons.go:510] duration metric: took 1.235130249s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0920 19:37:37.401947  786638 node_ready.go:53] node "test-preload-232055" has status "Ready":"False"
	I0920 19:37:39.895968  786638 node_ready.go:53] node "test-preload-232055" has status "Ready":"False"
	I0920 19:37:41.898029  786638 node_ready.go:53] node "test-preload-232055" has status "Ready":"False"
	I0920 19:37:42.897169  786638 node_ready.go:49] node "test-preload-232055" has status "Ready":"True"
	I0920 19:37:42.897193  786638 node_ready.go:38] duration metric: took 7.504790788s for node "test-preload-232055" to be "Ready" ...
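	[editor's note] The node_ready.go wait above polls the node object until its Ready condition turns True (about 7.5s in this run). Below is a minimal client-go sketch of that kind of check; the poll interval is an assumption and this is not minikube's actual implementation, only an illustration of the API it is exercising.

		// node_ready_sketch.go - hedged illustration of polling a node's Ready condition.
		package main

		import (
			"context"
			"fmt"
			"time"

			corev1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)

		func main() {
			// Kubeconfig path as written by the run above (see the settings.go lines).
			cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19678-739831/kubeconfig")
			if err != nil {
				panic(err)
			}
			cs, err := kubernetes.NewForConfig(cfg)
			if err != nil {
				panic(err)
			}

			ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
			defer cancel()
			for ctx.Err() == nil {
				node, err := cs.CoreV1().Nodes().Get(ctx, "test-preload-232055", metav1.GetOptions{})
				if err == nil {
					for _, c := range node.Status.Conditions {
						if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
							fmt.Println("node is Ready")
							return
						}
					}
				}
				time.Sleep(2 * time.Second) // assumed poll interval
			}
			panic("timed out waiting for node Ready")
		}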
	I0920 19:37:42.897203  786638 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:37:42.902620  786638 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-hxjzb" in "kube-system" namespace to be "Ready" ...
	I0920 19:37:42.907563  786638 pod_ready.go:93] pod "coredns-6d4b75cb6d-hxjzb" in "kube-system" namespace has status "Ready":"True"
	I0920 19:37:42.907590  786638 pod_ready.go:82] duration metric: took 4.945405ms for pod "coredns-6d4b75cb6d-hxjzb" in "kube-system" namespace to be "Ready" ...
	I0920 19:37:42.907598  786638 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-232055" in "kube-system" namespace to be "Ready" ...
	I0920 19:37:42.912025  786638 pod_ready.go:93] pod "etcd-test-preload-232055" in "kube-system" namespace has status "Ready":"True"
	I0920 19:37:42.912050  786638 pod_ready.go:82] duration metric: took 4.444373ms for pod "etcd-test-preload-232055" in "kube-system" namespace to be "Ready" ...
	I0920 19:37:42.912060  786638 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-232055" in "kube-system" namespace to be "Ready" ...
	I0920 19:37:43.418924  786638 pod_ready.go:93] pod "kube-apiserver-test-preload-232055" in "kube-system" namespace has status "Ready":"True"
	I0920 19:37:43.418948  786638 pod_ready.go:82] duration metric: took 506.880744ms for pod "kube-apiserver-test-preload-232055" in "kube-system" namespace to be "Ready" ...
	I0920 19:37:43.418958  786638 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-232055" in "kube-system" namespace to be "Ready" ...
	I0920 19:37:43.426135  786638 pod_ready.go:93] pod "kube-controller-manager-test-preload-232055" in "kube-system" namespace has status "Ready":"True"
	I0920 19:37:43.426152  786638 pod_ready.go:82] duration metric: took 7.188617ms for pod "kube-controller-manager-test-preload-232055" in "kube-system" namespace to be "Ready" ...
	I0920 19:37:43.426160  786638 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5q9tw" in "kube-system" namespace to be "Ready" ...
	I0920 19:37:43.699908  786638 pod_ready.go:93] pod "kube-proxy-5q9tw" in "kube-system" namespace has status "Ready":"True"
	I0920 19:37:43.699930  786638 pod_ready.go:82] duration metric: took 273.764034ms for pod "kube-proxy-5q9tw" in "kube-system" namespace to be "Ready" ...
	I0920 19:37:43.699939  786638 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-232055" in "kube-system" namespace to be "Ready" ...
	I0920 19:37:44.097575  786638 pod_ready.go:93] pod "kube-scheduler-test-preload-232055" in "kube-system" namespace has status "Ready":"True"
	I0920 19:37:44.097602  786638 pod_ready.go:82] duration metric: took 397.655713ms for pod "kube-scheduler-test-preload-232055" in "kube-system" namespace to be "Ready" ...
	I0920 19:37:44.097612  786638 pod_ready.go:39] duration metric: took 1.200400605s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:37:44.097627  786638 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:37:44.097686  786638 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:37:44.113682  786638 api_server.go:72] duration metric: took 8.931628697s to wait for apiserver process to appear ...
	I0920 19:37:44.113725  786638 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:37:44.113751  786638 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I0920 19:37:44.118779  786638 api_server.go:279] https://192.168.39.234:8443/healthz returned 200:
	ok
	I0920 19:37:44.119663  786638 api_server.go:141] control plane version: v1.24.4
	I0920 19:37:44.119683  786638 api_server.go:131] duration metric: took 5.951915ms to wait for apiserver health ...
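	[editor's note] The healthz wait above issues GETs against https://192.168.39.234:8443/healthz until the endpoint returns 200 with body "ok". A stripped-down Go sketch follows; a real client authenticates with the cluster CA and client certificate (see the rest.Config fields logged earlier) rather than skipping TLS verification, which is done here only for brevity.

		// healthz_sketch.go - poll the apiserver /healthz endpoint until it reports ok.
		package main

		import (
			"crypto/tls"
			"fmt"
			"io"
			"net/http"
			"time"
		)

		func main() {
			client := &http.Client{
				Timeout: 5 * time.Second,
				// Illustration only: load the cluster CA and client cert/key in real use.
				Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			}
			for {
				resp, err := client.Get("https://192.168.39.234:8443/healthz")
				if err == nil {
					body, _ := io.ReadAll(resp.Body)
					resp.Body.Close()
					if resp.StatusCode == http.StatusOK && string(body) == "ok" {
						fmt.Println("apiserver healthy")
						return
					}
				}
				time.Sleep(500 * time.Millisecond) // assumed retry interval
			}
		}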
	I0920 19:37:44.119691  786638 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:37:44.301458  786638 system_pods.go:59] 7 kube-system pods found
	I0920 19:37:44.301488  786638 system_pods.go:61] "coredns-6d4b75cb6d-hxjzb" [896e3237-03bb-4ca1-8bf9-62f356f7131a] Running
	I0920 19:37:44.301492  786638 system_pods.go:61] "etcd-test-preload-232055" [acfb9715-5154-4c99-b5c6-e44ea139a7e0] Running
	I0920 19:37:44.301496  786638 system_pods.go:61] "kube-apiserver-test-preload-232055" [4666930a-f715-42e5-aac8-d9076be7f547] Running
	I0920 19:37:44.301506  786638 system_pods.go:61] "kube-controller-manager-test-preload-232055" [8f006137-b60e-4cf0-b02d-611b8cdf38a3] Running
	I0920 19:37:44.301509  786638 system_pods.go:61] "kube-proxy-5q9tw" [c857a9f8-a3e8-422d-bfc4-908564e3c0c3] Running
	I0920 19:37:44.301512  786638 system_pods.go:61] "kube-scheduler-test-preload-232055" [6739376b-aeb0-46f9-a08f-37549421af03] Running
	I0920 19:37:44.301514  786638 system_pods.go:61] "storage-provisioner" [df0687ac-fa2e-45b0-951c-af46eeb7b2b6] Running
	I0920 19:37:44.301520  786638 system_pods.go:74] duration metric: took 181.823469ms to wait for pod list to return data ...
	I0920 19:37:44.301528  786638 default_sa.go:34] waiting for default service account to be created ...
	I0920 19:37:44.497172  786638 default_sa.go:45] found service account: "default"
	I0920 19:37:44.497198  786638 default_sa.go:55] duration metric: took 195.664768ms for default service account to be created ...
	I0920 19:37:44.497206  786638 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 19:37:44.699490  786638 system_pods.go:86] 7 kube-system pods found
	I0920 19:37:44.699519  786638 system_pods.go:89] "coredns-6d4b75cb6d-hxjzb" [896e3237-03bb-4ca1-8bf9-62f356f7131a] Running
	I0920 19:37:44.699525  786638 system_pods.go:89] "etcd-test-preload-232055" [acfb9715-5154-4c99-b5c6-e44ea139a7e0] Running
	I0920 19:37:44.699528  786638 system_pods.go:89] "kube-apiserver-test-preload-232055" [4666930a-f715-42e5-aac8-d9076be7f547] Running
	I0920 19:37:44.699532  786638 system_pods.go:89] "kube-controller-manager-test-preload-232055" [8f006137-b60e-4cf0-b02d-611b8cdf38a3] Running
	I0920 19:37:44.699541  786638 system_pods.go:89] "kube-proxy-5q9tw" [c857a9f8-a3e8-422d-bfc4-908564e3c0c3] Running
	I0920 19:37:44.699547  786638 system_pods.go:89] "kube-scheduler-test-preload-232055" [6739376b-aeb0-46f9-a08f-37549421af03] Running
	I0920 19:37:44.699551  786638 system_pods.go:89] "storage-provisioner" [df0687ac-fa2e-45b0-951c-af46eeb7b2b6] Running
	I0920 19:37:44.699570  786638 system_pods.go:126] duration metric: took 202.356735ms to wait for k8s-apps to be running ...
	I0920 19:37:44.699583  786638 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 19:37:44.699638  786638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:37:44.716858  786638 system_svc.go:56] duration metric: took 17.261272ms WaitForService to wait for kubelet
	I0920 19:37:44.716895  786638 kubeadm.go:582] duration metric: took 9.534848982s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:37:44.716919  786638 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:37:44.901207  786638 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:37:44.901238  786638 node_conditions.go:123] node cpu capacity is 2
	I0920 19:37:44.901252  786638 node_conditions.go:105] duration metric: took 184.326578ms to run NodePressure ...
	I0920 19:37:44.901267  786638 start.go:241] waiting for startup goroutines ...
	I0920 19:37:44.901277  786638 start.go:246] waiting for cluster config update ...
	I0920 19:37:44.901290  786638 start.go:255] writing updated cluster config ...
	I0920 19:37:44.901612  786638 ssh_runner.go:195] Run: rm -f paused
	I0920 19:37:44.952320  786638 start.go:600] kubectl: 1.31.1, cluster: 1.24.4 (minor skew: 7)
	I0920 19:37:44.954408  786638 out.go:201] 
	W0920 19:37:44.955638  786638 out.go:270] ! /usr/local/bin/kubectl is version 1.31.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0920 19:37:44.956953  786638 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0920 19:37:44.958235  786638 out.go:177] * Done! kubectl is now configured to use "test-preload-232055" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 20 19:37:45 test-preload-232055 crio[670]: time="2024-09-20 19:37:45.862546800Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726861065862519073,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae3a13c0-f3c9-40c5-84cd-f089bdeae768 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:37:45 test-preload-232055 crio[670]: time="2024-09-20 19:37:45.863152235Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=916d9af2-52d4-4f54-a2eb-a48c330ffca8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:37:45 test-preload-232055 crio[670]: time="2024-09-20 19:37:45.863200277Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=916d9af2-52d4-4f54-a2eb-a48c330ffca8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:37:45 test-preload-232055 crio[670]: time="2024-09-20 19:37:45.863350438Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db2de46e8d85048c4604184f0fc87fb4b512b7d3ddd3e29c321007851adbfaab,PodSandboxId:792634a30a926620bedd6309d3874e44a2f6ee3bab2d3b1b91759eae9d1c67df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726861061285880178,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-hxjzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896e3237-03bb-4ca1-8bf9-62f356f7131a,},Annotations:map[string]string{io.kubernetes.container.hash: 568d774,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2bfbd8e13779d38d9b1bbf26fe5dccda22b93ee9d1eb43808f307a7a7b9ddd,PodSandboxId:a4c84318bee0b3a144ff7a6432bc5e21526631a78e40474e75d23524dfcefcc6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726861054302355803,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5q9tw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c857a9f8-a3e8-422d-bfc4-908564e3c0c3,},Annotations:map[string]string{io.kubernetes.container.hash: b1fe8cb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e72c8731622d7b88e3e9adbf9799c6b3c69d6ac31a2cb9b9271beda2502faf,PodSandboxId:8311d3985d0b3d8ac14f2cc4febb0c9c79c190d8a0417c8e94b5144e36350873,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726861054042240843,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df0
687ac-fa2e-45b0-951c-af46eeb7b2b6,},Annotations:map[string]string{io.kubernetes.container.hash: 386e0b84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d369e11dfab4fbd0b5537c75647f30622982c6e501e73b9fa576990a85d07034,PodSandboxId:1dacaeb932c6f741a4ef833635b9539b86c9fa9531f8273578177e86d78d5a7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726861048121716956,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-232055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0391297ee079fe83d6dd82b4755e119f,},Annot
ations:map[string]string{io.kubernetes.container.hash: 32e37a0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013b9d8a05bcd16d3a0f35741e5352ddaa5b9898b742cc7610131cc5949640bc,PodSandboxId:f36ded3b66cf06940a9ba64b5edfa309572f5f9bac248f36bc1d50dca4ee06d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726861048082788951,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-232055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fe22eb52a305302132c460b16474bc2,},Annotations:map[
string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a16b7210b4422289f200e1ac4b007ebabe990747072a9169b54be6feb61c1173,PodSandboxId:5ce211516736ff4fd4017aee55318cfa4c0619e244d4d96720defbab621cb05e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726861048092917057,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-232055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a38431de93158b11cc9fdbcf3e7f733a,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 525081ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b46c6c9276d05eeb5007760a64b30f1c3a90beba579caec053da4214b8e3856f,PodSandboxId:5cf9e101bd73470748a567212858bbb1623d03561c82e4ac02669779c67b2c6d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726861048067871023,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-232055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a706f3e661265518a213052058806114,},Annotations
:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=916d9af2-52d4-4f54-a2eb-a48c330ffca8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:37:45 test-preload-232055 crio[670]: time="2024-09-20 19:37:45.899166182Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8e3e16bf-d3d5-4499-b14e-b946c1ff5609 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:37:45 test-preload-232055 crio[670]: time="2024-09-20 19:37:45.899234943Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8e3e16bf-d3d5-4499-b14e-b946c1ff5609 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:37:45 test-preload-232055 crio[670]: time="2024-09-20 19:37:45.901025135Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ef81a909-06b5-4f1f-a5f8-a4b363330db5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:37:45 test-preload-232055 crio[670]: time="2024-09-20 19:37:45.901545104Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726861065901521007,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ef81a909-06b5-4f1f-a5f8-a4b363330db5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:37:45 test-preload-232055 crio[670]: time="2024-09-20 19:37:45.902101308Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c77ab6e-220e-44d9-9dd6-f5aaeb44484c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:37:45 test-preload-232055 crio[670]: time="2024-09-20 19:37:45.902150933Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c77ab6e-220e-44d9-9dd6-f5aaeb44484c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:37:45 test-preload-232055 crio[670]: time="2024-09-20 19:37:45.902522090Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db2de46e8d85048c4604184f0fc87fb4b512b7d3ddd3e29c321007851adbfaab,PodSandboxId:792634a30a926620bedd6309d3874e44a2f6ee3bab2d3b1b91759eae9d1c67df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726861061285880178,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-hxjzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896e3237-03bb-4ca1-8bf9-62f356f7131a,},Annotations:map[string]string{io.kubernetes.container.hash: 568d774,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2bfbd8e13779d38d9b1bbf26fe5dccda22b93ee9d1eb43808f307a7a7b9ddd,PodSandboxId:a4c84318bee0b3a144ff7a6432bc5e21526631a78e40474e75d23524dfcefcc6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726861054302355803,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5q9tw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c857a9f8-a3e8-422d-bfc4-908564e3c0c3,},Annotations:map[string]string{io.kubernetes.container.hash: b1fe8cb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e72c8731622d7b88e3e9adbf9799c6b3c69d6ac31a2cb9b9271beda2502faf,PodSandboxId:8311d3985d0b3d8ac14f2cc4febb0c9c79c190d8a0417c8e94b5144e36350873,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726861054042240843,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df0
687ac-fa2e-45b0-951c-af46eeb7b2b6,},Annotations:map[string]string{io.kubernetes.container.hash: 386e0b84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d369e11dfab4fbd0b5537c75647f30622982c6e501e73b9fa576990a85d07034,PodSandboxId:1dacaeb932c6f741a4ef833635b9539b86c9fa9531f8273578177e86d78d5a7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726861048121716956,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-232055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0391297ee079fe83d6dd82b4755e119f,},Annot
ations:map[string]string{io.kubernetes.container.hash: 32e37a0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013b9d8a05bcd16d3a0f35741e5352ddaa5b9898b742cc7610131cc5949640bc,PodSandboxId:f36ded3b66cf06940a9ba64b5edfa309572f5f9bac248f36bc1d50dca4ee06d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726861048082788951,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-232055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fe22eb52a305302132c460b16474bc2,},Annotations:map[
string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a16b7210b4422289f200e1ac4b007ebabe990747072a9169b54be6feb61c1173,PodSandboxId:5ce211516736ff4fd4017aee55318cfa4c0619e244d4d96720defbab621cb05e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726861048092917057,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-232055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a38431de93158b11cc9fdbcf3e7f733a,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 525081ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b46c6c9276d05eeb5007760a64b30f1c3a90beba579caec053da4214b8e3856f,PodSandboxId:5cf9e101bd73470748a567212858bbb1623d03561c82e4ac02669779c67b2c6d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726861048067871023,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-232055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a706f3e661265518a213052058806114,},Annotations
:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1c77ab6e-220e-44d9-9dd6-f5aaeb44484c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:37:45 test-preload-232055 crio[670]: time="2024-09-20 19:37:45.940029806Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f262f721-4b18-4af3-83ec-5c19363b9f5f name=/runtime.v1.RuntimeService/Version
	Sep 20 19:37:45 test-preload-232055 crio[670]: time="2024-09-20 19:37:45.940096664Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f262f721-4b18-4af3-83ec-5c19363b9f5f name=/runtime.v1.RuntimeService/Version
	Sep 20 19:37:45 test-preload-232055 crio[670]: time="2024-09-20 19:37:45.941526058Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6d811d16-ab6c-48ec-ac1c-e4df9a235536 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:37:45 test-preload-232055 crio[670]: time="2024-09-20 19:37:45.941933220Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726861065941913708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6d811d16-ab6c-48ec-ac1c-e4df9a235536 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:37:45 test-preload-232055 crio[670]: time="2024-09-20 19:37:45.942594490Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0b653181-c9c9-449f-ae57-c28b4e0296fe name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:37:45 test-preload-232055 crio[670]: time="2024-09-20 19:37:45.942640426Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0b653181-c9c9-449f-ae57-c28b4e0296fe name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:37:45 test-preload-232055 crio[670]: time="2024-09-20 19:37:45.942789437Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db2de46e8d85048c4604184f0fc87fb4b512b7d3ddd3e29c321007851adbfaab,PodSandboxId:792634a30a926620bedd6309d3874e44a2f6ee3bab2d3b1b91759eae9d1c67df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726861061285880178,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-hxjzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896e3237-03bb-4ca1-8bf9-62f356f7131a,},Annotations:map[string]string{io.kubernetes.container.hash: 568d774,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2bfbd8e13779d38d9b1bbf26fe5dccda22b93ee9d1eb43808f307a7a7b9ddd,PodSandboxId:a4c84318bee0b3a144ff7a6432bc5e21526631a78e40474e75d23524dfcefcc6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726861054302355803,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5q9tw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c857a9f8-a3e8-422d-bfc4-908564e3c0c3,},Annotations:map[string]string{io.kubernetes.container.hash: b1fe8cb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e72c8731622d7b88e3e9adbf9799c6b3c69d6ac31a2cb9b9271beda2502faf,PodSandboxId:8311d3985d0b3d8ac14f2cc4febb0c9c79c190d8a0417c8e94b5144e36350873,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726861054042240843,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df0
687ac-fa2e-45b0-951c-af46eeb7b2b6,},Annotations:map[string]string{io.kubernetes.container.hash: 386e0b84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d369e11dfab4fbd0b5537c75647f30622982c6e501e73b9fa576990a85d07034,PodSandboxId:1dacaeb932c6f741a4ef833635b9539b86c9fa9531f8273578177e86d78d5a7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726861048121716956,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-232055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0391297ee079fe83d6dd82b4755e119f,},Annot
ations:map[string]string{io.kubernetes.container.hash: 32e37a0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013b9d8a05bcd16d3a0f35741e5352ddaa5b9898b742cc7610131cc5949640bc,PodSandboxId:f36ded3b66cf06940a9ba64b5edfa309572f5f9bac248f36bc1d50dca4ee06d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726861048082788951,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-232055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fe22eb52a305302132c460b16474bc2,},Annotations:map[
string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a16b7210b4422289f200e1ac4b007ebabe990747072a9169b54be6feb61c1173,PodSandboxId:5ce211516736ff4fd4017aee55318cfa4c0619e244d4d96720defbab621cb05e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726861048092917057,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-232055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a38431de93158b11cc9fdbcf3e7f733a,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 525081ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b46c6c9276d05eeb5007760a64b30f1c3a90beba579caec053da4214b8e3856f,PodSandboxId:5cf9e101bd73470748a567212858bbb1623d03561c82e4ac02669779c67b2c6d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726861048067871023,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-232055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a706f3e661265518a213052058806114,},Annotations
:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0b653181-c9c9-449f-ae57-c28b4e0296fe name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:37:45 test-preload-232055 crio[670]: time="2024-09-20 19:37:45.977078287Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a2462a83-bb93-43b8-b04b-01f4b1894f45 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:37:45 test-preload-232055 crio[670]: time="2024-09-20 19:37:45.977147329Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a2462a83-bb93-43b8-b04b-01f4b1894f45 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:37:45 test-preload-232055 crio[670]: time="2024-09-20 19:37:45.978163242Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9659f7a4-8b57-4403-9eab-f314700031a7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:37:45 test-preload-232055 crio[670]: time="2024-09-20 19:37:45.978858241Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726861065978833437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9659f7a4-8b57-4403-9eab-f314700031a7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:37:45 test-preload-232055 crio[670]: time="2024-09-20 19:37:45.979695229Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97d68c08-68ea-49c3-bb3f-5fbf87d7eda8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:37:45 test-preload-232055 crio[670]: time="2024-09-20 19:37:45.979763775Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97d68c08-68ea-49c3-bb3f-5fbf87d7eda8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:37:45 test-preload-232055 crio[670]: time="2024-09-20 19:37:45.979932645Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db2de46e8d85048c4604184f0fc87fb4b512b7d3ddd3e29c321007851adbfaab,PodSandboxId:792634a30a926620bedd6309d3874e44a2f6ee3bab2d3b1b91759eae9d1c67df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726861061285880178,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-hxjzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896e3237-03bb-4ca1-8bf9-62f356f7131a,},Annotations:map[string]string{io.kubernetes.container.hash: 568d774,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d2bfbd8e13779d38d9b1bbf26fe5dccda22b93ee9d1eb43808f307a7a7b9ddd,PodSandboxId:a4c84318bee0b3a144ff7a6432bc5e21526631a78e40474e75d23524dfcefcc6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726861054302355803,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5q9tw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: c857a9f8-a3e8-422d-bfc4-908564e3c0c3,},Annotations:map[string]string{io.kubernetes.container.hash: b1fe8cb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e72c8731622d7b88e3e9adbf9799c6b3c69d6ac31a2cb9b9271beda2502faf,PodSandboxId:8311d3985d0b3d8ac14f2cc4febb0c9c79c190d8a0417c8e94b5144e36350873,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726861054042240843,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df0
687ac-fa2e-45b0-951c-af46eeb7b2b6,},Annotations:map[string]string{io.kubernetes.container.hash: 386e0b84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d369e11dfab4fbd0b5537c75647f30622982c6e501e73b9fa576990a85d07034,PodSandboxId:1dacaeb932c6f741a4ef833635b9539b86c9fa9531f8273578177e86d78d5a7f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726861048121716956,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-232055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0391297ee079fe83d6dd82b4755e119f,},Annot
ations:map[string]string{io.kubernetes.container.hash: 32e37a0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013b9d8a05bcd16d3a0f35741e5352ddaa5b9898b742cc7610131cc5949640bc,PodSandboxId:f36ded3b66cf06940a9ba64b5edfa309572f5f9bac248f36bc1d50dca4ee06d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726861048082788951,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-232055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fe22eb52a305302132c460b16474bc2,},Annotations:map[
string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a16b7210b4422289f200e1ac4b007ebabe990747072a9169b54be6feb61c1173,PodSandboxId:5ce211516736ff4fd4017aee55318cfa4c0619e244d4d96720defbab621cb05e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726861048092917057,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-232055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a38431de93158b11cc9fdbcf3e7f733a,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 525081ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b46c6c9276d05eeb5007760a64b30f1c3a90beba579caec053da4214b8e3856f,PodSandboxId:5cf9e101bd73470748a567212858bbb1623d03561c82e4ac02669779c67b2c6d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726861048067871023,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-232055,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a706f3e661265518a213052058806114,},Annotations
:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=97d68c08-68ea-49c3-bb3f-5fbf87d7eda8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	db2de46e8d850       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   4 seconds ago       Running             coredns                   1                   792634a30a926       coredns-6d4b75cb6d-hxjzb
	3d2bfbd8e1377       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   11 seconds ago      Running             kube-proxy                1                   a4c84318bee0b       kube-proxy-5q9tw
	74e72c8731622       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Running             storage-provisioner       1                   8311d3985d0b3       storage-provisioner
	d369e11dfab4f       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   17 seconds ago      Running             etcd                      1                   1dacaeb932c6f       etcd-test-preload-232055
	a16b7210b4422       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   17 seconds ago      Running             kube-apiserver            1                   5ce211516736f       kube-apiserver-test-preload-232055
	013b9d8a05bcd       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   17 seconds ago      Running             kube-scheduler            1                   f36ded3b66cf0       kube-scheduler-test-preload-232055
	b46c6c9276d05       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   17 seconds ago      Running             kube-controller-manager   1                   5cf9e101bd734       kube-controller-manager-test-preload-232055
	
	
	==> coredns [db2de46e8d85048c4604184f0fc87fb4b512b7d3ddd3e29c321007851adbfaab] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:40472 - 6480 "HINFO IN 461156113682840003.5511857997976431139. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.030983727s
	
	
	==> describe nodes <==
	Name:               test-preload-232055
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-232055
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=test-preload-232055
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T19_36_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 19:36:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-232055
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:37:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 19:37:42 +0000   Fri, 20 Sep 2024 19:36:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 19:37:42 +0000   Fri, 20 Sep 2024 19:36:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 19:37:42 +0000   Fri, 20 Sep 2024 19:36:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 19:37:42 +0000   Fri, 20 Sep 2024 19:37:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.234
	  Hostname:    test-preload-232055
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 990c72ca415f4ead9129f6fc78398c51
	  System UUID:                990c72ca-415f-4ead-9129-f6fc78398c51
	  Boot ID:                    68053e6e-88ef-4373-abe1-e4e5f7249010
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-hxjzb                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     78s
	  kube-system                 etcd-test-preload-232055                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         91s
	  kube-system                 kube-apiserver-test-preload-232055             250m (12%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-controller-manager-test-preload-232055    200m (10%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-5q9tw                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-test-preload-232055             100m (5%)     0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11s                kube-proxy       
	  Normal  Starting                 76s                kube-proxy       
	  Normal  Starting                 91s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  91s                kubelet          Node test-preload-232055 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    91s                kubelet          Node test-preload-232055 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     91s                kubelet          Node test-preload-232055 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  91s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                81s                kubelet          Node test-preload-232055 status is now: NodeReady
	  Normal  RegisteredNode           79s                node-controller  Node test-preload-232055 event: Registered Node test-preload-232055 in Controller
	  Normal  Starting                 19s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19s (x8 over 19s)  kubelet          Node test-preload-232055 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x8 over 19s)  kubelet          Node test-preload-232055 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x7 over 19s)  kubelet          Node test-preload-232055 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                 node-controller  Node test-preload-232055 event: Registered Node test-preload-232055 in Controller
	
	
	==> dmesg <==
	[Sep20 19:36] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050821] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039797] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.772488] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Sep20 19:37] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.600810] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.545857] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.060795] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053311] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.170494] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.136149] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.252238] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[ +12.834828] systemd-fstab-generator[988]: Ignoring "noauto" option for root device
	[  +0.057376] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.625188] systemd-fstab-generator[1117]: Ignoring "noauto" option for root device
	[  +5.588152] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.545259] systemd-fstab-generator[1760]: Ignoring "noauto" option for root device
	[  +5.834449] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [d369e11dfab4fbd0b5537c75647f30622982c6e501e73b9fa576990a85d07034] <==
	{"level":"info","ts":"2024-09-20T19:37:28.595Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"de9917ec5c740094","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-20T19:37:28.596Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-20T19:37:28.597Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-20T19:37:28.606Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"de9917ec5c740094","initial-advertise-peer-urls":["https://192.168.39.234:2380"],"listen-peer-urls":["https://192.168.39.234:2380"],"advertise-client-urls":["https://192.168.39.234:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.234:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-20T19:37:28.599Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 switched to configuration voters=(16039877851787559060)"}
	{"level":"info","ts":"2024-09-20T19:37:28.600Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.234:2380"}
	{"level":"info","ts":"2024-09-20T19:37:28.607Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-20T19:37:28.609Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6193f7f4ee516b71","local-member-id":"de9917ec5c740094","added-peer-id":"de9917ec5c740094","added-peer-peer-urls":["https://192.168.39.234:2380"]}
	{"level":"info","ts":"2024-09-20T19:37:28.609Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6193f7f4ee516b71","local-member-id":"de9917ec5c740094","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:37:28.609Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:37:28.609Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.234:2380"}
	{"level":"info","ts":"2024-09-20T19:37:29.928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-20T19:37:29.928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-20T19:37:29.928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 received MsgPreVoteResp from de9917ec5c740094 at term 2"}
	{"level":"info","ts":"2024-09-20T19:37:29.928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 became candidate at term 3"}
	{"level":"info","ts":"2024-09-20T19:37:29.928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 received MsgVoteResp from de9917ec5c740094 at term 3"}
	{"level":"info","ts":"2024-09-20T19:37:29.928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 became leader at term 3"}
	{"level":"info","ts":"2024-09-20T19:37:29.928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: de9917ec5c740094 elected leader de9917ec5c740094 at term 3"}
	{"level":"info","ts":"2024-09-20T19:37:29.929Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"de9917ec5c740094","local-member-attributes":"{Name:test-preload-232055 ClientURLs:[https://192.168.39.234:2379]}","request-path":"/0/members/de9917ec5c740094/attributes","cluster-id":"6193f7f4ee516b71","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T19:37:29.929Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T19:37:29.931Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T19:37:29.931Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T19:37:29.932Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.234:2379"}
	{"level":"info","ts":"2024-09-20T19:37:29.937Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T19:37:29.937Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:37:46 up 0 min,  0 users,  load average: 0.78, 0.22, 0.07
	Linux test-preload-232055 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a16b7210b4422289f200e1ac4b007ebabe990747072a9169b54be6feb61c1173] <==
	I0920 19:37:32.320117       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0920 19:37:32.320216       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0920 19:37:32.320296       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0920 19:37:32.327277       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0920 19:37:32.327310       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0920 19:37:32.346623       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0920 19:37:32.363671       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0920 19:37:32.427346       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0920 19:37:32.429779       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 19:37:32.432117       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0920 19:37:32.433594       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0920 19:37:32.476309       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0920 19:37:32.492534       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0920 19:37:32.504961       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0920 19:37:32.506153       1 cache.go:39] Caches are synced for autoregister controller
	I0920 19:37:32.991693       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0920 19:37:33.303272       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0920 19:37:34.110882       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0920 19:37:34.127901       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0920 19:37:34.170936       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0920 19:37:34.191108       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0920 19:37:34.196507       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0920 19:37:34.607341       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0920 19:37:44.921732       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0920 19:37:44.989184       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [b46c6c9276d05eeb5007760a64b30f1c3a90beba579caec053da4214b8e3856f] <==
	I0920 19:37:44.902454       1 shared_informer.go:262] Caches are synced for daemon sets
	I0920 19:37:44.907938       1 shared_informer.go:262] Caches are synced for TTL
	I0920 19:37:44.909103       1 shared_informer.go:262] Caches are synced for taint
	I0920 19:37:44.909248       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0920 19:37:44.909407       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-232055. Assuming now as a timestamp.
	I0920 19:37:44.909461       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0920 19:37:44.909702       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0920 19:37:44.910503       1 event.go:294] "Event occurred" object="test-preload-232055" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-232055 event: Registered Node test-preload-232055 in Controller"
	I0920 19:37:44.913664       1 shared_informer.go:262] Caches are synced for persistent volume
	I0920 19:37:44.913820       1 shared_informer.go:262] Caches are synced for namespace
	I0920 19:37:44.936715       1 shared_informer.go:262] Caches are synced for attach detach
	I0920 19:37:44.970109       1 shared_informer.go:262] Caches are synced for node
	I0920 19:37:44.970229       1 range_allocator.go:173] Starting range CIDR allocator
	I0920 19:37:44.970257       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0920 19:37:44.970270       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0920 19:37:44.976835       1 shared_informer.go:262] Caches are synced for endpoint
	I0920 19:37:45.022592       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0920 19:37:45.064202       1 shared_informer.go:262] Caches are synced for resource quota
	I0920 19:37:45.080528       1 shared_informer.go:262] Caches are synced for stateful set
	I0920 19:37:45.085908       1 shared_informer.go:262] Caches are synced for resource quota
	I0920 19:37:45.089440       1 shared_informer.go:262] Caches are synced for disruption
	I0920 19:37:45.089476       1 disruption.go:371] Sending events to api server.
	I0920 19:37:45.520912       1 shared_informer.go:262] Caches are synced for garbage collector
	I0920 19:37:45.520932       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0920 19:37:45.529658       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [3d2bfbd8e13779d38d9b1bbf26fe5dccda22b93ee9d1eb43808f307a7a7b9ddd] <==
	I0920 19:37:34.520213       1 node.go:163] Successfully retrieved node IP: 192.168.39.234
	I0920 19:37:34.520766       1 server_others.go:138] "Detected node IP" address="192.168.39.234"
	I0920 19:37:34.520864       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0920 19:37:34.578670       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0920 19:37:34.578841       1 server_others.go:206] "Using iptables Proxier"
	I0920 19:37:34.579901       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0920 19:37:34.582232       1 server.go:661] "Version info" version="v1.24.4"
	I0920 19:37:34.582585       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 19:37:34.590695       1 config.go:317] "Starting service config controller"
	I0920 19:37:34.595650       1 config.go:226] "Starting endpoint slice config controller"
	I0920 19:37:34.602865       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0920 19:37:34.596151       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0920 19:37:34.598919       1 config.go:444] "Starting node config controller"
	I0920 19:37:34.612861       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0920 19:37:34.703501       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0920 19:37:34.712808       1 shared_informer.go:262] Caches are synced for service config
	I0920 19:37:34.713113       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [013b9d8a05bcd16d3a0f35741e5352ddaa5b9898b742cc7610131cc5949640bc] <==
	I0920 19:37:29.121637       1 serving.go:348] Generated self-signed cert in-memory
	W0920 19:37:32.354973       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 19:37:32.355222       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 19:37:32.355267       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 19:37:32.355292       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 19:37:32.429184       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0920 19:37:32.429218       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 19:37:32.440424       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0920 19:37:32.440630       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 19:37:32.440670       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 19:37:32.440709       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0920 19:37:32.541686       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 19:37:32 test-preload-232055 kubelet[1124]: I0920 19:37:32.476006    1124 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-232055"
	Sep 20 19:37:32 test-preload-232055 kubelet[1124]: I0920 19:37:32.484109    1124 setters.go:532] "Node became not ready" node="test-preload-232055" condition={Type:Ready Status:False LastHeartbeatTime:2024-09-20 19:37:32.483973165 +0000 UTC m=+5.305083014 LastTransitionTime:2024-09-20 19:37:32.483973165 +0000 UTC m=+5.305083014 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Sep 20 19:37:33 test-preload-232055 kubelet[1124]: I0920 19:37:33.299190    1124 apiserver.go:52] "Watching apiserver"
	Sep 20 19:37:33 test-preload-232055 kubelet[1124]: I0920 19:37:33.304873    1124 topology_manager.go:200] "Topology Admit Handler"
	Sep 20 19:37:33 test-preload-232055 kubelet[1124]: I0920 19:37:33.304984    1124 topology_manager.go:200] "Topology Admit Handler"
	Sep 20 19:37:33 test-preload-232055 kubelet[1124]: I0920 19:37:33.305021    1124 topology_manager.go:200] "Topology Admit Handler"
	Sep 20 19:37:33 test-preload-232055 kubelet[1124]: E0920 19:37:33.305274    1124 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-hxjzb" podUID=896e3237-03bb-4ca1-8bf9-62f356f7131a
	Sep 20 19:37:33 test-preload-232055 kubelet[1124]: I0920 19:37:33.371699    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd8pq\" (UniqueName: \"kubernetes.io/projected/896e3237-03bb-4ca1-8bf9-62f356f7131a-kube-api-access-rd8pq\") pod \"coredns-6d4b75cb6d-hxjzb\" (UID: \"896e3237-03bb-4ca1-8bf9-62f356f7131a\") " pod="kube-system/coredns-6d4b75cb6d-hxjzb"
	Sep 20 19:37:33 test-preload-232055 kubelet[1124]: I0920 19:37:33.372196    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c857a9f8-a3e8-422d-bfc4-908564e3c0c3-kube-proxy\") pod \"kube-proxy-5q9tw\" (UID: \"c857a9f8-a3e8-422d-bfc4-908564e3c0c3\") " pod="kube-system/kube-proxy-5q9tw"
	Sep 20 19:37:33 test-preload-232055 kubelet[1124]: I0920 19:37:33.372351    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c857a9f8-a3e8-422d-bfc4-908564e3c0c3-lib-modules\") pod \"kube-proxy-5q9tw\" (UID: \"c857a9f8-a3e8-422d-bfc4-908564e3c0c3\") " pod="kube-system/kube-proxy-5q9tw"
	Sep 20 19:37:33 test-preload-232055 kubelet[1124]: I0920 19:37:33.372506    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/df0687ac-fa2e-45b0-951c-af46eeb7b2b6-tmp\") pod \"storage-provisioner\" (UID: \"df0687ac-fa2e-45b0-951c-af46eeb7b2b6\") " pod="kube-system/storage-provisioner"
	Sep 20 19:37:33 test-preload-232055 kubelet[1124]: I0920 19:37:33.372639    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c857a9f8-a3e8-422d-bfc4-908564e3c0c3-xtables-lock\") pod \"kube-proxy-5q9tw\" (UID: \"c857a9f8-a3e8-422d-bfc4-908564e3c0c3\") " pod="kube-system/kube-proxy-5q9tw"
	Sep 20 19:37:33 test-preload-232055 kubelet[1124]: I0920 19:37:33.372765    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wnr5\" (UniqueName: \"kubernetes.io/projected/df0687ac-fa2e-45b0-951c-af46eeb7b2b6-kube-api-access-4wnr5\") pod \"storage-provisioner\" (UID: \"df0687ac-fa2e-45b0-951c-af46eeb7b2b6\") " pod="kube-system/storage-provisioner"
	Sep 20 19:37:33 test-preload-232055 kubelet[1124]: I0920 19:37:33.372895    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/896e3237-03bb-4ca1-8bf9-62f356f7131a-config-volume\") pod \"coredns-6d4b75cb6d-hxjzb\" (UID: \"896e3237-03bb-4ca1-8bf9-62f356f7131a\") " pod="kube-system/coredns-6d4b75cb6d-hxjzb"
	Sep 20 19:37:33 test-preload-232055 kubelet[1124]: I0920 19:37:33.373018    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nwpv\" (UniqueName: \"kubernetes.io/projected/c857a9f8-a3e8-422d-bfc4-908564e3c0c3-kube-api-access-7nwpv\") pod \"kube-proxy-5q9tw\" (UID: \"c857a9f8-a3e8-422d-bfc4-908564e3c0c3\") " pod="kube-system/kube-proxy-5q9tw"
	Sep 20 19:37:33 test-preload-232055 kubelet[1124]: I0920 19:37:33.373119    1124 reconciler.go:159] "Reconciler: start to sync state"
	Sep 20 19:37:33 test-preload-232055 kubelet[1124]: E0920 19:37:33.477285    1124 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 20 19:37:33 test-preload-232055 kubelet[1124]: E0920 19:37:33.477406    1124 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/896e3237-03bb-4ca1-8bf9-62f356f7131a-config-volume podName:896e3237-03bb-4ca1-8bf9-62f356f7131a nodeName:}" failed. No retries permitted until 2024-09-20 19:37:33.977337373 +0000 UTC m=+6.798447233 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/896e3237-03bb-4ca1-8bf9-62f356f7131a-config-volume") pod "coredns-6d4b75cb6d-hxjzb" (UID: "896e3237-03bb-4ca1-8bf9-62f356f7131a") : object "kube-system"/"coredns" not registered
	Sep 20 19:37:33 test-preload-232055 kubelet[1124]: E0920 19:37:33.981754    1124 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 20 19:37:33 test-preload-232055 kubelet[1124]: E0920 19:37:33.981821    1124 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/896e3237-03bb-4ca1-8bf9-62f356f7131a-config-volume podName:896e3237-03bb-4ca1-8bf9-62f356f7131a nodeName:}" failed. No retries permitted until 2024-09-20 19:37:34.981804067 +0000 UTC m=+7.802913915 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/896e3237-03bb-4ca1-8bf9-62f356f7131a-config-volume") pod "coredns-6d4b75cb6d-hxjzb" (UID: "896e3237-03bb-4ca1-8bf9-62f356f7131a") : object "kube-system"/"coredns" not registered
	Sep 20 19:37:34 test-preload-232055 kubelet[1124]: E0920 19:37:34.988601    1124 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 20 19:37:34 test-preload-232055 kubelet[1124]: E0920 19:37:34.988689    1124 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/896e3237-03bb-4ca1-8bf9-62f356f7131a-config-volume podName:896e3237-03bb-4ca1-8bf9-62f356f7131a nodeName:}" failed. No retries permitted until 2024-09-20 19:37:36.988675214 +0000 UTC m=+9.809785074 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/896e3237-03bb-4ca1-8bf9-62f356f7131a-config-volume") pod "coredns-6d4b75cb6d-hxjzb" (UID: "896e3237-03bb-4ca1-8bf9-62f356f7131a") : object "kube-system"/"coredns" not registered
	Sep 20 19:37:35 test-preload-232055 kubelet[1124]: E0920 19:37:35.439682    1124 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-hxjzb" podUID=896e3237-03bb-4ca1-8bf9-62f356f7131a
	Sep 20 19:37:37 test-preload-232055 kubelet[1124]: E0920 19:37:37.003634    1124 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 20 19:37:37 test-preload-232055 kubelet[1124]: E0920 19:37:37.003758    1124 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/896e3237-03bb-4ca1-8bf9-62f356f7131a-config-volume podName:896e3237-03bb-4ca1-8bf9-62f356f7131a nodeName:}" failed. No retries permitted until 2024-09-20 19:37:41.003734631 +0000 UTC m=+13.824844493 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/896e3237-03bb-4ca1-8bf9-62f356f7131a-config-volume") pod "coredns-6d4b75cb6d-hxjzb" (UID: "896e3237-03bb-4ca1-8bf9-62f356f7131a") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [74e72c8731622d7b88e3e9adbf9799c6b3c69d6ac31a2cb9b9271beda2502faf] <==
	I0920 19:37:34.173614       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-232055 -n test-preload-232055
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-232055 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-232055" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-232055
--- FAIL: TestPreload (166.35s)

                                                
                                    
TestKubernetesUpgrade (388.34s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-220027 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-220027 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m13.563547764s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-220027] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-220027" primary control-plane node in "kubernetes-upgrade-220027" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 19:42:35.011443  792944 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:42:35.011706  792944 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:42:35.011715  792944 out.go:358] Setting ErrFile to fd 2...
	I0920 19:42:35.011720  792944 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:42:35.011907  792944 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
	I0920 19:42:35.012486  792944 out.go:352] Setting JSON to false
	I0920 19:42:35.013460  792944 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12305,"bootTime":1726849050,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 19:42:35.013563  792944 start.go:139] virtualization: kvm guest
	I0920 19:42:35.015720  792944 out.go:177] * [kubernetes-upgrade-220027] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 19:42:35.016966  792944 notify.go:220] Checking for updates...
	I0920 19:42:35.016990  792944 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 19:42:35.018304  792944 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:42:35.019668  792944 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 19:42:35.021039  792944 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 19:42:35.022223  792944 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 19:42:35.023544  792944 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:42:35.025188  792944 config.go:182] Loaded profile config "NoKubernetes-677486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0920 19:42:35.025275  792944 config.go:182] Loaded profile config "cert-expiration-741208": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:42:35.025384  792944 config.go:182] Loaded profile config "running-upgrade-666227": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0920 19:42:35.025485  792944 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:42:35.062316  792944 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 19:42:35.063792  792944 start.go:297] selected driver: kvm2
	I0920 19:42:35.063809  792944 start.go:901] validating driver "kvm2" against <nil>
	I0920 19:42:35.063821  792944 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:42:35.064578  792944 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:42:35.064684  792944 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19678-739831/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 19:42:35.080283  792944 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 19:42:35.080328  792944 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 19:42:35.080561  792944 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 19:42:35.080591  792944 cni.go:84] Creating CNI manager for ""
	I0920 19:42:35.080647  792944 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:42:35.080657  792944 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 19:42:35.080714  792944 start.go:340] cluster config:
	{Name:kubernetes-upgrade-220027 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-220027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:42:35.080833  792944 iso.go:125] acquiring lock: {Name:mk7c8e0c52ea50ffb7ac28fb9347d4f667085c62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:42:35.082607  792944 out.go:177] * Starting "kubernetes-upgrade-220027" primary control-plane node in "kubernetes-upgrade-220027" cluster
	I0920 19:42:35.084030  792944 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 19:42:35.084080  792944 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 19:42:35.084103  792944 cache.go:56] Caching tarball of preloaded images
	I0920 19:42:35.084200  792944 preload.go:172] Found /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 19:42:35.084214  792944 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0920 19:42:35.084332  792944 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/config.json ...
	I0920 19:42:35.084360  792944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/config.json: {Name:mk04f29046a36239f2d7ee1ed0b25642e3c863e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:42:35.084523  792944 start.go:360] acquireMachinesLock for kubernetes-upgrade-220027: {Name:mke27b943eaf3105a3a7818ba8cbb5bd07aa92e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 19:43:16.019209  792944 start.go:364] duration metric: took 40.93464935s to acquireMachinesLock for "kubernetes-upgrade-220027"
	I0920 19:43:16.019270  792944 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-220027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-220027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:43:16.019390  792944 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 19:43:16.021526  792944 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 19:43:16.021718  792944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:43:16.021777  792944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:43:16.038505  792944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40327
	I0920 19:43:16.038965  792944 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:43:16.039603  792944 main.go:141] libmachine: Using API Version  1
	I0920 19:43:16.039652  792944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:43:16.039968  792944 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:43:16.040162  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetMachineName
	I0920 19:43:16.040310  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .DriverName
	I0920 19:43:16.040467  792944 start.go:159] libmachine.API.Create for "kubernetes-upgrade-220027" (driver="kvm2")
	I0920 19:43:16.040496  792944 client.go:168] LocalClient.Create starting
	I0920 19:43:16.040542  792944 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem
	I0920 19:43:16.040579  792944 main.go:141] libmachine: Decoding PEM data...
	I0920 19:43:16.040598  792944 main.go:141] libmachine: Parsing certificate...
	I0920 19:43:16.040650  792944 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem
	I0920 19:43:16.040670  792944 main.go:141] libmachine: Decoding PEM data...
	I0920 19:43:16.040678  792944 main.go:141] libmachine: Parsing certificate...
	I0920 19:43:16.040693  792944 main.go:141] libmachine: Running pre-create checks...
	I0920 19:43:16.040699  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .PreCreateCheck
	I0920 19:43:16.041036  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetConfigRaw
	I0920 19:43:16.041457  792944 main.go:141] libmachine: Creating machine...
	I0920 19:43:16.041479  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .Create
	I0920 19:43:16.041610  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Creating KVM machine...
	I0920 19:43:16.042815  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found existing default KVM network
	I0920 19:43:16.043938  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | I0920 19:43:16.043787  793563 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:5c:65:a8} reservation:<nil>}
	I0920 19:43:16.044789  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | I0920 19:43:16.044679  793563 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:0a:c4:55} reservation:<nil>}
	I0920 19:43:16.045592  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | I0920 19:43:16.045512  793563 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:75:9e:7f} reservation:<nil>}
	I0920 19:43:16.046575  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | I0920 19:43:16.046496  793563 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00027f890}
	I0920 19:43:16.046638  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | created network xml: 
	I0920 19:43:16.046659  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | <network>
	I0920 19:43:16.046673  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG |   <name>mk-kubernetes-upgrade-220027</name>
	I0920 19:43:16.046690  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG |   <dns enable='no'/>
	I0920 19:43:16.046701  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG |   
	I0920 19:43:16.046715  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0920 19:43:16.046729  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG |     <dhcp>
	I0920 19:43:16.046742  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0920 19:43:16.046754  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG |     </dhcp>
	I0920 19:43:16.046765  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG |   </ip>
	I0920 19:43:16.046777  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG |   
	I0920 19:43:16.046796  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | </network>
	I0920 19:43:16.046809  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | 
	I0920 19:43:16.052450  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | trying to create private KVM network mk-kubernetes-upgrade-220027 192.168.72.0/24...
	I0920 19:43:16.123848  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | private KVM network mk-kubernetes-upgrade-220027 192.168.72.0/24 created
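For reference, the network definition logged above is plain libvirt XML for the free 192.168.72.0/24 subnet. Below is a minimal Go sketch of how such a document can be rendered from a template; it is illustrative only, the struct and template names are made up, and it is not minikube's actual code.

// Illustrative sketch only: render a libvirt network definition like the one
// logged above using text/template. Values are taken from the log.
package main

import (
	"os"
	"text/template"
)

type netParams struct {
	Name, Gateway, Netmask, DHCPStart, DHCPEnd string
}

const networkTmpl = `<network>
  <name>{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
    <dhcp>
      <range start='{{.DHCPStart}}' end='{{.DHCPEnd}}'/>
    </dhcp>
  </ip>
</network>
`

func main() {
	p := netParams{
		Name:      "mk-kubernetes-upgrade-220027",
		Gateway:   "192.168.72.1",
		Netmask:   "255.255.255.0",
		DHCPStart: "192.168.72.2",
		DHCPEnd:   "192.168.72.253",
	}
	// template.Must panics on a parse error, which is fine for a constant template.
	t := template.Must(template.New("net").Parse(networkTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}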
	I0920 19:43:16.123913  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | I0920 19:43:16.123793  793563 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 19:43:16.123929  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Setting up store path in /home/jenkins/minikube-integration/19678-739831/.minikube/machines/kubernetes-upgrade-220027 ...
	I0920 19:43:16.123947  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Building disk image from file:///home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 19:43:16.123960  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Downloading /home/jenkins/minikube-integration/19678-739831/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 19:43:16.383025  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | I0920 19:43:16.382838  793563 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/kubernetes-upgrade-220027/id_rsa...
	I0920 19:43:16.631027  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | I0920 19:43:16.630808  793563 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/kubernetes-upgrade-220027/kubernetes-upgrade-220027.rawdisk...
	I0920 19:43:16.631063  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | Writing magic tar header
	I0920 19:43:16.631085  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | Writing SSH key tar header
	I0920 19:43:16.631098  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | I0920 19:43:16.630976  793563 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19678-739831/.minikube/machines/kubernetes-upgrade-220027 ...
	I0920 19:43:16.631120  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/kubernetes-upgrade-220027
	I0920 19:43:16.631134  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube/machines
	I0920 19:43:16.631148  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 19:43:16.631164  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube/machines/kubernetes-upgrade-220027 (perms=drwx------)
	I0920 19:43:16.631187  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831
	I0920 19:43:16.631202  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube/machines (perms=drwxr-xr-x)
	I0920 19:43:16.631221  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube (perms=drwxr-xr-x)
	I0920 19:43:16.631234  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831 (perms=drwxrwxr-x)
	I0920 19:43:16.631247  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 19:43:16.631257  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 19:43:16.631267  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 19:43:16.631279  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | Checking permissions on dir: /home/jenkins
	I0920 19:43:16.631290  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Creating domain...
	I0920 19:43:16.631303  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | Checking permissions on dir: /home
	I0920 19:43:16.631314  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | Skipping /home - not owner
	I0920 19:43:16.632500  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) define libvirt domain using xml: 
	I0920 19:43:16.632546  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) <domain type='kvm'>
	I0920 19:43:16.632559  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)   <name>kubernetes-upgrade-220027</name>
	I0920 19:43:16.632565  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)   <memory unit='MiB'>2200</memory>
	I0920 19:43:16.632570  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)   <vcpu>2</vcpu>
	I0920 19:43:16.632577  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)   <features>
	I0920 19:43:16.632584  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)     <acpi/>
	I0920 19:43:16.632591  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)     <apic/>
	I0920 19:43:16.632619  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)     <pae/>
	I0920 19:43:16.632641  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)     
	I0920 19:43:16.632651  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)   </features>
	I0920 19:43:16.632666  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)   <cpu mode='host-passthrough'>
	I0920 19:43:16.632674  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)   
	I0920 19:43:16.632684  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)   </cpu>
	I0920 19:43:16.632693  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)   <os>
	I0920 19:43:16.632714  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)     <type>hvm</type>
	I0920 19:43:16.632722  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)     <boot dev='cdrom'/>
	I0920 19:43:16.632726  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)     <boot dev='hd'/>
	I0920 19:43:16.632732  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)     <bootmenu enable='no'/>
	I0920 19:43:16.632738  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)   </os>
	I0920 19:43:16.632769  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)   <devices>
	I0920 19:43:16.632793  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)     <disk type='file' device='cdrom'>
	I0920 19:43:16.632818  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)       <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/kubernetes-upgrade-220027/boot2docker.iso'/>
	I0920 19:43:16.632843  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)       <target dev='hdc' bus='scsi'/>
	I0920 19:43:16.632857  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)       <readonly/>
	I0920 19:43:16.632869  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)     </disk>
	I0920 19:43:16.632883  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)     <disk type='file' device='disk'>
	I0920 19:43:16.632895  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 19:43:16.632912  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)       <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/kubernetes-upgrade-220027/kubernetes-upgrade-220027.rawdisk'/>
	I0920 19:43:16.632922  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)       <target dev='hda' bus='virtio'/>
	I0920 19:43:16.632930  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)     </disk>
	I0920 19:43:16.632939  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)     <interface type='network'>
	I0920 19:43:16.632951  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)       <source network='mk-kubernetes-upgrade-220027'/>
	I0920 19:43:16.632964  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)       <model type='virtio'/>
	I0920 19:43:16.632975  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)     </interface>
	I0920 19:43:16.632997  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)     <interface type='network'>
	I0920 19:43:16.633030  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)       <source network='default'/>
	I0920 19:43:16.633044  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)       <model type='virtio'/>
	I0920 19:43:16.633054  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)     </interface>
	I0920 19:43:16.633065  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)     <serial type='pty'>
	I0920 19:43:16.633075  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)       <target port='0'/>
	I0920 19:43:16.633086  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)     </serial>
	I0920 19:43:16.633093  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)     <console type='pty'>
	I0920 19:43:16.633129  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)       <target type='serial' port='0'/>
	I0920 19:43:16.633152  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)     </console>
	I0920 19:43:16.633165  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)     <rng model='virtio'>
	I0920 19:43:16.633176  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)       <backend model='random'>/dev/random</backend>
	I0920 19:43:16.633185  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)     </rng>
	I0920 19:43:16.633194  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)     
	I0920 19:43:16.633201  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)     
	I0920 19:43:16.633211  792944 main.go:141] libmachine: (kubernetes-upgrade-220027)   </devices>
	I0920 19:43:16.633219  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) </domain>
	I0920 19:43:16.633228  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) 
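The domain definition above is likewise ordinary libvirt XML. As a rough illustration, the same definition could be loaded by hand through the virsh CLI; minikube itself drives libvirt through its Go bindings rather than virsh, and the file name used below is hypothetical.

// Illustrative only: define and start a domain from an XML file via the virsh
// CLI. Assumes domain.xml holds a definition like the one logged above and
// that the current user may talk to the qemu:///system libvirt daemon.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	if err := run("virsh", "-c", "qemu:///system", "define", "domain.xml"); err != nil {
		panic(err)
	}
	if err := run("virsh", "-c", "qemu:///system", "start", "kubernetes-upgrade-220027"); err != nil {
		panic(err)
	}
}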
	I0920 19:43:16.637227  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:3e:85:78 in network default
	I0920 19:43:16.637891  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Ensuring networks are active...
	I0920 19:43:16.637917  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:16.638568  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Ensuring network default is active
	I0920 19:43:16.638989  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Ensuring network mk-kubernetes-upgrade-220027 is active
	I0920 19:43:16.639606  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Getting domain xml...
	I0920 19:43:16.640460  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Creating domain...
	I0920 19:43:17.865620  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Waiting to get IP...
	I0920 19:43:17.866374  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:17.866784  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | unable to find current IP address of domain kubernetes-upgrade-220027 in network mk-kubernetes-upgrade-220027
	I0920 19:43:17.866808  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | I0920 19:43:17.866764  793563 retry.go:31] will retry after 297.667975ms: waiting for machine to come up
	I0920 19:43:18.166821  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:18.167346  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | unable to find current IP address of domain kubernetes-upgrade-220027 in network mk-kubernetes-upgrade-220027
	I0920 19:43:18.167375  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | I0920 19:43:18.167303  793563 retry.go:31] will retry after 348.839884ms: waiting for machine to come up
	I0920 19:43:18.518043  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:18.518483  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | unable to find current IP address of domain kubernetes-upgrade-220027 in network mk-kubernetes-upgrade-220027
	I0920 19:43:18.518505  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | I0920 19:43:18.518440  793563 retry.go:31] will retry after 438.536764ms: waiting for machine to come up
	I0920 19:43:18.959265  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:18.959793  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | unable to find current IP address of domain kubernetes-upgrade-220027 in network mk-kubernetes-upgrade-220027
	I0920 19:43:18.959817  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | I0920 19:43:18.959745  793563 retry.go:31] will retry after 558.327138ms: waiting for machine to come up
	I0920 19:43:19.519635  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:19.520043  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | unable to find current IP address of domain kubernetes-upgrade-220027 in network mk-kubernetes-upgrade-220027
	I0920 19:43:19.520061  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | I0920 19:43:19.520006  793563 retry.go:31] will retry after 572.116171ms: waiting for machine to come up
	I0920 19:43:20.093446  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:20.093964  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | unable to find current IP address of domain kubernetes-upgrade-220027 in network mk-kubernetes-upgrade-220027
	I0920 19:43:20.094013  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | I0920 19:43:20.093912  793563 retry.go:31] will retry after 776.805713ms: waiting for machine to come up
	I0920 19:43:20.872362  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:20.872966  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | unable to find current IP address of domain kubernetes-upgrade-220027 in network mk-kubernetes-upgrade-220027
	I0920 19:43:20.872994  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | I0920 19:43:20.872929  793563 retry.go:31] will retry after 835.672535ms: waiting for machine to come up
	I0920 19:43:21.710249  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:21.710686  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | unable to find current IP address of domain kubernetes-upgrade-220027 in network mk-kubernetes-upgrade-220027
	I0920 19:43:21.710735  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | I0920 19:43:21.710645  793563 retry.go:31] will retry after 1.269884555s: waiting for machine to come up
	I0920 19:43:22.982225  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:22.982694  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | unable to find current IP address of domain kubernetes-upgrade-220027 in network mk-kubernetes-upgrade-220027
	I0920 19:43:22.982728  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | I0920 19:43:22.982623  793563 retry.go:31] will retry after 1.70278689s: waiting for machine to come up
	I0920 19:43:24.687723  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:24.688284  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | unable to find current IP address of domain kubernetes-upgrade-220027 in network mk-kubernetes-upgrade-220027
	I0920 19:43:24.688316  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | I0920 19:43:24.688210  793563 retry.go:31] will retry after 1.455246353s: waiting for machine to come up
	I0920 19:43:26.145542  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:26.146058  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | unable to find current IP address of domain kubernetes-upgrade-220027 in network mk-kubernetes-upgrade-220027
	I0920 19:43:26.146090  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | I0920 19:43:26.145976  793563 retry.go:31] will retry after 2.53035455s: waiting for machine to come up
	I0920 19:43:28.677732  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:28.678283  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | unable to find current IP address of domain kubernetes-upgrade-220027 in network mk-kubernetes-upgrade-220027
	I0920 19:43:28.678312  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | I0920 19:43:28.678230  793563 retry.go:31] will retry after 2.485164807s: waiting for machine to come up
	I0920 19:43:31.164923  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:31.165360  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | unable to find current IP address of domain kubernetes-upgrade-220027 in network mk-kubernetes-upgrade-220027
	I0920 19:43:31.165394  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | I0920 19:43:31.165326  793563 retry.go:31] will retry after 4.462561889s: waiting for machine to come up
	I0920 19:43:35.632137  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:35.632634  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | unable to find current IP address of domain kubernetes-upgrade-220027 in network mk-kubernetes-upgrade-220027
	I0920 19:43:35.632665  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | I0920 19:43:35.632580  793563 retry.go:31] will retry after 4.864781107s: waiting for machine to come up
	I0920 19:43:40.500675  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:40.501198  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Found IP for machine: 192.168.72.238
	I0920 19:43:40.501226  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Reserving static IP address...
	I0920 19:43:40.501242  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has current primary IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:40.501666  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-220027", mac: "52:54:00:09:b6:97", ip: "192.168.72.238"} in network mk-kubernetes-upgrade-220027
	I0920 19:43:40.584441  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Reserved static IP address: 192.168.72.238
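The "will retry after ..." lines above come from a simple backoff poll for the VM's DHCP lease. Below is a small Go sketch of the same pattern; it is illustrative only, lookupIP is a stand-in for the real lease query, and the delays and jitter are assumptions rather than minikube's exact schedule.

// Illustrative backoff poll: keep asking for the VM's IP until it appears or
// the deadline passes, sleeping a growing, jittered amount between attempts.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil && ip != "" {
			return ip, nil
		}
		// Add some jitter and grow the delay, roughly like the log above.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay += delay / 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	// Stub lookup that returns the address from the log immediately.
	ip, err := waitForIP(func() (string, error) { return "192.168.72.238", nil }, time.Minute)
	fmt.Println(ip, err)
}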
	I0920 19:43:40.584470  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | Getting to WaitForSSH function...
	I0920 19:43:40.584499  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Waiting for SSH to be available...
	I0920 19:43:40.587644  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:40.587986  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:b6:97", ip: ""} in network mk-kubernetes-upgrade-220027: {Iface:virbr4 ExpiryTime:2024-09-20 20:43:31 +0000 UTC Type:0 Mac:52:54:00:09:b6:97 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:minikube Clientid:01:52:54:00:09:b6:97}
	I0920 19:43:40.588010  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:40.588166  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | Using SSH client type: external
	I0920 19:43:40.588190  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | Using SSH private key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/kubernetes-upgrade-220027/id_rsa (-rw-------)
	I0920 19:43:40.588260  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19678-739831/.minikube/machines/kubernetes-upgrade-220027/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 19:43:40.588285  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | About to run SSH command:
	I0920 19:43:40.588323  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | exit 0
	I0920 19:43:40.723106  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | SSH cmd err, output: <nil>: 
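The SSH probe above uses the external ssh binary with the flags shown in the log to run `exit 0` on the guest. A self-contained Go sketch of that probe follows; it is illustrative only, with the key path and address copied from the log above.

// Illustrative only: reproduce the "external" SSH probe from the log, which
// simply runs `exit 0` on the guest using the key written by libmachine.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	key := "/home/jenkins/minikube-integration/19678-739831/.minikube/machines/kubernetes-upgrade-220027/id_rsa"
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes", "-i", key,
		"-p", "22", "docker@192.168.72.238",
		"exit 0",
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}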
	I0920 19:43:40.723383  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) KVM machine creation complete!
	I0920 19:43:40.723721  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetConfigRaw
	I0920 19:43:40.724336  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .DriverName
	I0920 19:43:40.724528  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .DriverName
	I0920 19:43:40.724713  792944 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 19:43:40.724738  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetState
	I0920 19:43:40.726108  792944 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 19:43:40.726123  792944 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 19:43:40.726131  792944 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 19:43:40.726139  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHHostname
	I0920 19:43:40.728741  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:40.729165  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:b6:97", ip: ""} in network mk-kubernetes-upgrade-220027: {Iface:virbr4 ExpiryTime:2024-09-20 20:43:31 +0000 UTC Type:0 Mac:52:54:00:09:b6:97 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:kubernetes-upgrade-220027 Clientid:01:52:54:00:09:b6:97}
	I0920 19:43:40.729197  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:40.729313  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHPort
	I0920 19:43:40.729486  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHKeyPath
	I0920 19:43:40.729620  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHKeyPath
	I0920 19:43:40.729782  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHUsername
	I0920 19:43:40.729956  792944 main.go:141] libmachine: Using SSH client type: native
	I0920 19:43:40.730186  792944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I0920 19:43:40.730199  792944 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 19:43:40.834264  792944 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:43:40.834345  792944 main.go:141] libmachine: Detecting the provisioner...
	I0920 19:43:40.834360  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHHostname
	I0920 19:43:40.837850  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:40.838249  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:b6:97", ip: ""} in network mk-kubernetes-upgrade-220027: {Iface:virbr4 ExpiryTime:2024-09-20 20:43:31 +0000 UTC Type:0 Mac:52:54:00:09:b6:97 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:kubernetes-upgrade-220027 Clientid:01:52:54:00:09:b6:97}
	I0920 19:43:40.838287  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:40.838540  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHPort
	I0920 19:43:40.838760  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHKeyPath
	I0920 19:43:40.838970  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHKeyPath
	I0920 19:43:40.839165  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHUsername
	I0920 19:43:40.839395  792944 main.go:141] libmachine: Using SSH client type: native
	I0920 19:43:40.839634  792944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I0920 19:43:40.839648  792944 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 19:43:40.951620  792944 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 19:43:40.951730  792944 main.go:141] libmachine: found compatible host: buildroot
	I0920 19:43:40.951747  792944 main.go:141] libmachine: Provisioning with buildroot...
	I0920 19:43:40.951761  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetMachineName
	I0920 19:43:40.952030  792944 buildroot.go:166] provisioning hostname "kubernetes-upgrade-220027"
	I0920 19:43:40.952064  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetMachineName
	I0920 19:43:40.952292  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHHostname
	I0920 19:43:40.955053  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:40.955428  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:b6:97", ip: ""} in network mk-kubernetes-upgrade-220027: {Iface:virbr4 ExpiryTime:2024-09-20 20:43:31 +0000 UTC Type:0 Mac:52:54:00:09:b6:97 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:kubernetes-upgrade-220027 Clientid:01:52:54:00:09:b6:97}
	I0920 19:43:40.955468  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:40.955638  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHPort
	I0920 19:43:40.955836  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHKeyPath
	I0920 19:43:40.956013  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHKeyPath
	I0920 19:43:40.956158  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHUsername
	I0920 19:43:40.956347  792944 main.go:141] libmachine: Using SSH client type: native
	I0920 19:43:40.956575  792944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I0920 19:43:40.956596  792944 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-220027 && echo "kubernetes-upgrade-220027" | sudo tee /etc/hostname
	I0920 19:43:41.090951  792944 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-220027
	
	I0920 19:43:41.090990  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHHostname
	I0920 19:43:41.094007  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:41.094436  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:b6:97", ip: ""} in network mk-kubernetes-upgrade-220027: {Iface:virbr4 ExpiryTime:2024-09-20 20:43:31 +0000 UTC Type:0 Mac:52:54:00:09:b6:97 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:kubernetes-upgrade-220027 Clientid:01:52:54:00:09:b6:97}
	I0920 19:43:41.094482  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:41.094815  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHPort
	I0920 19:43:41.095032  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHKeyPath
	I0920 19:43:41.095214  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHKeyPath
	I0920 19:43:41.095392  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHUsername
	I0920 19:43:41.095557  792944 main.go:141] libmachine: Using SSH client type: native
	I0920 19:43:41.095783  792944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I0920 19:43:41.095808  792944 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-220027' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-220027/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-220027' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:43:41.219767  792944 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:43:41.219809  792944 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19678-739831/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-739831/.minikube}
	I0920 19:43:41.219869  792944 buildroot.go:174] setting up certificates
	I0920 19:43:41.219895  792944 provision.go:84] configureAuth start
	I0920 19:43:41.219919  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetMachineName
	I0920 19:43:41.220225  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetIP
	I0920 19:43:41.222905  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:41.223360  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:b6:97", ip: ""} in network mk-kubernetes-upgrade-220027: {Iface:virbr4 ExpiryTime:2024-09-20 20:43:31 +0000 UTC Type:0 Mac:52:54:00:09:b6:97 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:kubernetes-upgrade-220027 Clientid:01:52:54:00:09:b6:97}
	I0920 19:43:41.223430  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:41.223509  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHHostname
	I0920 19:43:41.225801  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:41.226239  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:b6:97", ip: ""} in network mk-kubernetes-upgrade-220027: {Iface:virbr4 ExpiryTime:2024-09-20 20:43:31 +0000 UTC Type:0 Mac:52:54:00:09:b6:97 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:kubernetes-upgrade-220027 Clientid:01:52:54:00:09:b6:97}
	I0920 19:43:41.226282  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:41.226443  792944 provision.go:143] copyHostCerts
	I0920 19:43:41.226509  792944 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem, removing ...
	I0920 19:43:41.226535  792944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 19:43:41.226602  792944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem (1078 bytes)
	I0920 19:43:41.226702  792944 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem, removing ...
	I0920 19:43:41.226710  792944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 19:43:41.226731  792944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem (1123 bytes)
	I0920 19:43:41.226781  792944 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem, removing ...
	I0920 19:43:41.226789  792944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 19:43:41.226806  792944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem (1679 bytes)
	I0920 19:43:41.226872  792944 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-220027 san=[127.0.0.1 192.168.72.238 kubernetes-upgrade-220027 localhost minikube]
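The server certificate above is signed by the local CA with the listed SANs. Below is a rough Go sketch of that kind of signing step using crypto/x509; it is illustrative only, assumes a PKCS#1 RSA CA key (an assumption, not shown in the log), and is not minikube's actual certificate code.

// Illustrative sketch: sign a server certificate for the SANs logged above
// with an existing CA certificate and key read from PEM files.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func mustPEM(path string) *pem.Block {
	b, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(b)
	if block == nil {
		panic("no PEM data in " + path)
	}
	return block
}

func main() {
	caCert, err := x509.ParseCertificate(mustPEM("ca.pem").Bytes)
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca-key.pem").Bytes) // assumes an RSA PKCS#1 key
	if err != nil {
		panic(err)
	}
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-220027"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"kubernetes-upgrade-220027", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.238")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}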
	I0920 19:43:41.542426  792944 provision.go:177] copyRemoteCerts
	I0920 19:43:41.542507  792944 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:43:41.542536  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHHostname
	I0920 19:43:41.545400  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:41.545848  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:b6:97", ip: ""} in network mk-kubernetes-upgrade-220027: {Iface:virbr4 ExpiryTime:2024-09-20 20:43:31 +0000 UTC Type:0 Mac:52:54:00:09:b6:97 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:kubernetes-upgrade-220027 Clientid:01:52:54:00:09:b6:97}
	I0920 19:43:41.545893  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:41.546037  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHPort
	I0920 19:43:41.546284  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHKeyPath
	I0920 19:43:41.546483  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHUsername
	I0920 19:43:41.546646  792944 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/kubernetes-upgrade-220027/id_rsa Username:docker}
	I0920 19:43:41.629636  792944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:43:41.656781  792944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0920 19:43:41.688311  792944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 19:43:41.715943  792944 provision.go:87] duration metric: took 496.020952ms to configureAuth
	I0920 19:43:41.715979  792944 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:43:41.716174  792944 config.go:182] Loaded profile config "kubernetes-upgrade-220027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 19:43:41.716280  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHHostname
	I0920 19:43:41.719702  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:41.720104  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:b6:97", ip: ""} in network mk-kubernetes-upgrade-220027: {Iface:virbr4 ExpiryTime:2024-09-20 20:43:31 +0000 UTC Type:0 Mac:52:54:00:09:b6:97 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:kubernetes-upgrade-220027 Clientid:01:52:54:00:09:b6:97}
	I0920 19:43:41.720140  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:41.720368  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHPort
	I0920 19:43:41.720624  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHKeyPath
	I0920 19:43:41.720837  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHKeyPath
	I0920 19:43:41.721040  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHUsername
	I0920 19:43:41.721204  792944 main.go:141] libmachine: Using SSH client type: native
	I0920 19:43:41.721432  792944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I0920 19:43:41.721455  792944 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:43:41.970992  792944 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:43:41.971033  792944 main.go:141] libmachine: Checking connection to Docker...
	I0920 19:43:41.971046  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetURL
	I0920 19:43:41.972289  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | Using libvirt version 6000000
	I0920 19:43:41.974644  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:41.975005  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:b6:97", ip: ""} in network mk-kubernetes-upgrade-220027: {Iface:virbr4 ExpiryTime:2024-09-20 20:43:31 +0000 UTC Type:0 Mac:52:54:00:09:b6:97 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:kubernetes-upgrade-220027 Clientid:01:52:54:00:09:b6:97}
	I0920 19:43:41.975033  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:41.975264  792944 main.go:141] libmachine: Docker is up and running!
	I0920 19:43:41.975280  792944 main.go:141] libmachine: Reticulating splines...
	I0920 19:43:41.975289  792944 client.go:171] duration metric: took 25.934784228s to LocalClient.Create
	I0920 19:43:41.975313  792944 start.go:167] duration metric: took 25.934847754s to libmachine.API.Create "kubernetes-upgrade-220027"
	I0920 19:43:41.975326  792944 start.go:293] postStartSetup for "kubernetes-upgrade-220027" (driver="kvm2")
	I0920 19:43:41.975335  792944 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:43:41.975355  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .DriverName
	I0920 19:43:41.975583  792944 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:43:41.975610  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHHostname
	I0920 19:43:41.977756  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:41.978073  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:b6:97", ip: ""} in network mk-kubernetes-upgrade-220027: {Iface:virbr4 ExpiryTime:2024-09-20 20:43:31 +0000 UTC Type:0 Mac:52:54:00:09:b6:97 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:kubernetes-upgrade-220027 Clientid:01:52:54:00:09:b6:97}
	I0920 19:43:41.978098  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:41.978202  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHPort
	I0920 19:43:41.978380  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHKeyPath
	I0920 19:43:41.978517  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHUsername
	I0920 19:43:41.978676  792944 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/kubernetes-upgrade-220027/id_rsa Username:docker}
	I0920 19:43:42.061017  792944 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:43:42.065384  792944 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:43:42.065421  792944 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/addons for local assets ...
	I0920 19:43:42.065487  792944 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/files for local assets ...
	I0920 19:43:42.065564  792944 filesync.go:149] local asset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> 7484972.pem in /etc/ssl/certs
	I0920 19:43:42.065673  792944 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:43:42.074969  792944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 19:43:42.099887  792944 start.go:296] duration metric: took 124.54332ms for postStartSetup
	I0920 19:43:42.099944  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetConfigRaw
	I0920 19:43:42.100611  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetIP
	I0920 19:43:42.103627  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:42.104016  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:b6:97", ip: ""} in network mk-kubernetes-upgrade-220027: {Iface:virbr4 ExpiryTime:2024-09-20 20:43:31 +0000 UTC Type:0 Mac:52:54:00:09:b6:97 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:kubernetes-upgrade-220027 Clientid:01:52:54:00:09:b6:97}
	I0920 19:43:42.104050  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:42.104255  792944 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/config.json ...
	I0920 19:43:42.104450  792944 start.go:128] duration metric: took 26.085043276s to createHost
	I0920 19:43:42.104478  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHHostname
	I0920 19:43:42.106686  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:42.106983  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:b6:97", ip: ""} in network mk-kubernetes-upgrade-220027: {Iface:virbr4 ExpiryTime:2024-09-20 20:43:31 +0000 UTC Type:0 Mac:52:54:00:09:b6:97 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:kubernetes-upgrade-220027 Clientid:01:52:54:00:09:b6:97}
	I0920 19:43:42.107016  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:42.107149  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHPort
	I0920 19:43:42.107333  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHKeyPath
	I0920 19:43:42.107494  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHKeyPath
	I0920 19:43:42.107608  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHUsername
	I0920 19:43:42.107792  792944 main.go:141] libmachine: Using SSH client type: native
	I0920 19:43:42.107977  792944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I0920 19:43:42.107987  792944 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:43:42.215587  792944 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726861422.172639258
	
	I0920 19:43:42.215626  792944 fix.go:216] guest clock: 1726861422.172639258
	I0920 19:43:42.215637  792944 fix.go:229] Guest: 2024-09-20 19:43:42.172639258 +0000 UTC Remote: 2024-09-20 19:43:42.104462759 +0000 UTC m=+67.130608265 (delta=68.176499ms)
	I0920 19:43:42.215705  792944 fix.go:200] guest clock delta is within tolerance: 68.176499ms
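The clock check above reads the guest's clock with `date +%s.%N` and compares it against the host's. A small Go sketch of the same comparison follows, using the two timestamps from the log; the 2s tolerance below is an assumed value, since the actual threshold is not shown in the log.

// Illustrative sketch: parse a `date +%s.%N` sample from the guest and report
// the drift against the host's clock reading from the same moment.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1726861422.172639258") // guest sample from the log
	if err != nil {
		panic(err)
	}
	host := time.Unix(1726861422, 104462759) // host clock at the same moment, from the log
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold for this sketch
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
}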
	I0920 19:43:42.215712  792944 start.go:83] releasing machines lock for "kubernetes-upgrade-220027", held for 26.196463953s
	I0920 19:43:42.215750  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .DriverName
	I0920 19:43:42.216047  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetIP
	I0920 19:43:42.218733  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:42.219113  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:b6:97", ip: ""} in network mk-kubernetes-upgrade-220027: {Iface:virbr4 ExpiryTime:2024-09-20 20:43:31 +0000 UTC Type:0 Mac:52:54:00:09:b6:97 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:kubernetes-upgrade-220027 Clientid:01:52:54:00:09:b6:97}
	I0920 19:43:42.219139  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:42.219266  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .DriverName
	I0920 19:43:42.219765  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .DriverName
	I0920 19:43:42.219962  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .DriverName
	I0920 19:43:42.220060  792944 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:43:42.220114  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHHostname
	I0920 19:43:42.220156  792944 ssh_runner.go:195] Run: cat /version.json
	I0920 19:43:42.220177  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHHostname
	I0920 19:43:42.222776  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:42.223075  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:42.223163  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:b6:97", ip: ""} in network mk-kubernetes-upgrade-220027: {Iface:virbr4 ExpiryTime:2024-09-20 20:43:31 +0000 UTC Type:0 Mac:52:54:00:09:b6:97 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:kubernetes-upgrade-220027 Clientid:01:52:54:00:09:b6:97}
	I0920 19:43:42.223195  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:42.223403  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:b6:97", ip: ""} in network mk-kubernetes-upgrade-220027: {Iface:virbr4 ExpiryTime:2024-09-20 20:43:31 +0000 UTC Type:0 Mac:52:54:00:09:b6:97 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:kubernetes-upgrade-220027 Clientid:01:52:54:00:09:b6:97}
	I0920 19:43:42.223445  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:42.223481  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHPort
	I0920 19:43:42.223605  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHPort
	I0920 19:43:42.223703  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHKeyPath
	I0920 19:43:42.223746  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHKeyPath
	I0920 19:43:42.223900  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHUsername
	I0920 19:43:42.223901  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHUsername
	I0920 19:43:42.224037  792944 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/kubernetes-upgrade-220027/id_rsa Username:docker}
	I0920 19:43:42.224071  792944 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/kubernetes-upgrade-220027/id_rsa Username:docker}
	I0920 19:43:42.328653  792944 ssh_runner.go:195] Run: systemctl --version
	I0920 19:43:42.335293  792944 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:43:42.492134  792944 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 19:43:42.498998  792944 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:43:42.499081  792944 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:43:42.517132  792944 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 19:43:42.517162  792944 start.go:495] detecting cgroup driver to use...
	I0920 19:43:42.517243  792944 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:43:42.537986  792944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:43:42.552768  792944 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:43:42.552820  792944 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:43:42.567594  792944 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:43:42.581429  792944 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:43:42.698244  792944 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:43:42.854753  792944 docker.go:233] disabling docker service ...
	I0920 19:43:42.854858  792944 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:43:42.874156  792944 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:43:42.891872  792944 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:43:43.030831  792944 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:43:43.157564  792944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
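
The stop/disable/mask sequence above takes the docker and cri-docker units out of the way before cri-o is configured. A rough sketch of that pattern follows; the systemctl commands mirror the ones in the log, while the helper itself is illustrative and not minikube source:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// disableUnit stops, disables, and masks a systemd unit, mirroring the
// stop -f / disable / mask commands issued over SSH in the log above.
func disableUnit(unit string) error {
	for _, args := range [][]string{
		{"systemctl", "stop", "-f", unit},
		{"systemctl", "disable", unit},
		{"systemctl", "mask", unit},
	} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	for _, unit := range []string{"cri-docker.service", "docker.service"} {
		if err := disableUnit(unit); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
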
	I0920 19:43:43.171891  792944 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:43:43.192366  792944 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0920 19:43:43.192432  792944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:43:43.207368  792944 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:43:43.207430  792944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:43:43.222449  792944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:43:43.238163  792944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:43:43.252840  792944 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:43:43.268575  792944 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:43:43.278485  792944 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 19:43:43.278554  792944 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 19:43:43.291379  792944 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
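
The netfilter check above is allowed to fail: when /proc/sys/net/bridge/bridge-nf-call-iptables is missing, the br_netfilter module is loaded and IPv4 forwarding is switched on. A small sketch of that fallback, assuming the paths shown in the log (illustrative only, requires root):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter loads br_netfilter when the bridge sysctl is absent,
// then enables IPv4 forwarding, mirroring the modprobe and echo steps above.
func ensureBridgeNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
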
	I0920 19:43:43.303111  792944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:43:43.415684  792944 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 19:43:43.525117  792944 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:43:43.525204  792944 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:43:43.531245  792944 start.go:563] Will wait 60s for crictl version
	I0920 19:43:43.531304  792944 ssh_runner.go:195] Run: which crictl
	I0920 19:43:43.535989  792944 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:43:43.576197  792944 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 19:43:43.576292  792944 ssh_runner.go:195] Run: crio --version
	I0920 19:43:43.613696  792944 ssh_runner.go:195] Run: crio --version
	I0920 19:43:43.644261  792944 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0920 19:43:43.645607  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetIP
	I0920 19:43:43.648772  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:43.649153  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:b6:97", ip: ""} in network mk-kubernetes-upgrade-220027: {Iface:virbr4 ExpiryTime:2024-09-20 20:43:31 +0000 UTC Type:0 Mac:52:54:00:09:b6:97 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:kubernetes-upgrade-220027 Clientid:01:52:54:00:09:b6:97}
	I0920 19:43:43.649190  792944 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:43:43.649414  792944 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0920 19:43:43.653623  792944 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:43:43.668212  792944 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-220027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-220027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.238 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:43:43.668351  792944 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 19:43:43.668416  792944 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:43:43.704994  792944 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 19:43:43.705087  792944 ssh_runner.go:195] Run: which lz4
	I0920 19:43:43.709587  792944 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 19:43:43.713926  792944 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 19:43:43.713964  792944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0920 19:43:45.407317  792944 crio.go:462] duration metric: took 1.697778618s to copy over tarball
	I0920 19:43:45.407405  792944 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 19:43:48.075733  792944 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.668281031s)
	I0920 19:43:48.075768  792944 crio.go:469] duration metric: took 2.668420065s to extract the tarball
	I0920 19:43:48.075810  792944 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 19:43:48.132862  792944 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:43:48.186151  792944 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 19:43:48.186178  792944 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 19:43:48.186276  792944 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:43:48.186309  792944 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:43:48.186309  792944 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0920 19:43:48.186320  792944 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0920 19:43:48.186306  792944 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:43:48.186364  792944 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0920 19:43:48.186287  792944 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:43:48.186287  792944 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:43:48.188248  792944 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:43:48.188275  792944 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0920 19:43:48.188278  792944 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:43:48.188252  792944 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0920 19:43:48.188308  792944 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0920 19:43:48.188274  792944 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:43:48.188251  792944 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:43:48.188254  792944 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:43:48.353356  792944 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0920 19:43:48.366031  792944 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0920 19:43:48.368979  792944 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:43:48.374820  792944 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0920 19:43:48.389348  792944 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:43:48.390888  792944 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:43:48.396804  792944 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:43:48.440729  792944 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0920 19:43:48.440776  792944 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0920 19:43:48.440835  792944 ssh_runner.go:195] Run: which crictl
	I0920 19:43:48.529155  792944 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:43:48.529472  792944 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0920 19:43:48.529525  792944 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:43:48.529572  792944 ssh_runner.go:195] Run: which crictl
	I0920 19:43:48.529774  792944 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0920 19:43:48.529819  792944 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0920 19:43:48.529885  792944 ssh_runner.go:195] Run: which crictl
	I0920 19:43:48.532905  792944 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0920 19:43:48.532942  792944 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0920 19:43:48.532979  792944 ssh_runner.go:195] Run: which crictl
	I0920 19:43:48.561605  792944 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0920 19:43:48.561655  792944 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:43:48.561689  792944 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0920 19:43:48.561731  792944 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:43:48.561773  792944 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0920 19:43:48.561783  792944 ssh_runner.go:195] Run: which crictl
	I0920 19:43:48.561700  792944 ssh_runner.go:195] Run: which crictl
	I0920 19:43:48.561797  792944 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:43:48.561861  792944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 19:43:48.561881  792944 ssh_runner.go:195] Run: which crictl
	I0920 19:43:48.719813  792944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:43:48.719854  792944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 19:43:48.719906  792944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 19:43:48.719975  792944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:43:48.720037  792944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:43:48.720038  792944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 19:43:48.720119  792944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:43:48.877116  792944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:43:48.877139  792944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 19:43:48.882461  792944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 19:43:48.882521  792944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:43:48.882536  792944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:43:48.884724  792944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 19:43:48.884899  792944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:43:49.009440  792944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 19:43:49.009487  792944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 19:43:49.051507  792944 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0920 19:43:49.051616  792944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 19:43:49.051669  792944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 19:43:49.051784  792944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 19:43:49.051839  792944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 19:43:49.136182  792944 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0920 19:43:49.136182  792944 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0920 19:43:49.176937  792944 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0920 19:43:49.176983  792944 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0920 19:43:49.176991  792944 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0920 19:43:49.176937  792944 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0920 19:43:49.177066  792944 cache_images.go:92] duration metric: took 990.853529ms to LoadCachedImages
	W0920 19:43:49.177132  792944 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19678-739831/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19678-739831/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0920 19:43:49.177158  792944 kubeadm.go:934] updating node { 192.168.72.238 8443 v1.20.0 crio true true} ...
	I0920 19:43:49.177267  792944 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-220027 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-220027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:43:49.177362  792944 ssh_runner.go:195] Run: crio config
	I0920 19:43:49.228888  792944 cni.go:84] Creating CNI manager for ""
	I0920 19:43:49.228917  792944 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:43:49.228928  792944 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:43:49.228948  792944 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.238 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-220027 NodeName:kubernetes-upgrade-220027 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 19:43:49.229080  792944 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-220027"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 19:43:49.229142  792944 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 19:43:49.240066  792944 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:43:49.240149  792944 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:43:49.250018  792944 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0920 19:43:49.269540  792944 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:43:49.288172  792944 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0920 19:43:49.305371  792944 ssh_runner.go:195] Run: grep 192.168.72.238	control-plane.minikube.internal$ /etc/hosts
	I0920 19:43:49.310199  792944 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
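
The bash one-liner above rewrites /etc/hosts idempotently: any existing line for the name is dropped and a fresh "IP<TAB>name" entry is appended. A hypothetical Go equivalent of the same pattern (not how minikube implements it):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry removes any existing line ending in "\t<name>" and appends
// a fresh "ip\tname" record, matching the grep -v / echo pipeline above.
func upsertHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHostsEntry("/etc/hosts", "192.168.72.238", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
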
	I0920 19:43:49.323749  792944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:43:49.450948  792944 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:43:49.470862  792944 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027 for IP: 192.168.72.238
	I0920 19:43:49.470890  792944 certs.go:194] generating shared ca certs ...
	I0920 19:43:49.470914  792944 certs.go:226] acquiring lock for ca certs: {Name:mkf559981e1ff96dd3b092845a7637f34a653668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:43:49.471101  792944 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key
	I0920 19:43:49.471163  792944 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key
	I0920 19:43:49.471176  792944 certs.go:256] generating profile certs ...
	I0920 19:43:49.471249  792944 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/client.key
	I0920 19:43:49.471271  792944 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/client.crt with IP's: []
	I0920 19:43:49.803830  792944 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/client.crt ...
	I0920 19:43:49.803868  792944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/client.crt: {Name:mk86887ea4e573315c1d5c416ea22dc330453ad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:43:49.804067  792944 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/client.key ...
	I0920 19:43:49.804085  792944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/client.key: {Name:mk84b8087dfaef62900339574430b365d08ab91a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:43:49.804197  792944 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/apiserver.key.ee0a722d
	I0920 19:43:49.804223  792944 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/apiserver.crt.ee0a722d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.238]
	I0920 19:43:49.949123  792944 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/apiserver.crt.ee0a722d ...
	I0920 19:43:49.949153  792944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/apiserver.crt.ee0a722d: {Name:mkc192f70bde237aac02d5cd75635d800e6334a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:43:49.949321  792944 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/apiserver.key.ee0a722d ...
	I0920 19:43:49.949342  792944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/apiserver.key.ee0a722d: {Name:mk2a18839b2c6cc4620f4e869a7e7c80bec3a775 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:43:49.949438  792944 certs.go:381] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/apiserver.crt.ee0a722d -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/apiserver.crt
	I0920 19:43:49.949539  792944 certs.go:385] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/apiserver.key.ee0a722d -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/apiserver.key
	I0920 19:43:49.949621  792944 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/proxy-client.key
	I0920 19:43:49.949639  792944 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/proxy-client.crt with IP's: []
	I0920 19:43:50.019771  792944 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/proxy-client.crt ...
	I0920 19:43:50.019800  792944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/proxy-client.crt: {Name:mk28090d014cbccc7ea6fa5f871a5c24e1c9a84d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:43:50.060922  792944 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/proxy-client.key ...
	I0920 19:43:50.060965  792944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/proxy-client.key: {Name:mk466b74c548f4adfe649a9c04740c9d5553b6e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:43:50.061212  792944 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem (1338 bytes)
	W0920 19:43:50.061260  792944 certs.go:480] ignoring /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497_empty.pem, impossibly tiny 0 bytes
	I0920 19:43:50.061275  792944 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:43:50.061306  792944 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:43:50.061330  792944 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:43:50.061363  792944 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem (1679 bytes)
	I0920 19:43:50.061417  792944 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 19:43:50.062087  792944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:43:50.092065  792944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:43:50.120390  792944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:43:50.150599  792944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:43:50.175779  792944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0920 19:43:50.204049  792944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 19:43:50.231381  792944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:43:50.286210  792944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 19:43:50.321545  792944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /usr/share/ca-certificates/7484972.pem (1708 bytes)
	I0920 19:43:50.361992  792944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:43:50.394327  792944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem --> /usr/share/ca-certificates/748497.pem (1338 bytes)
	I0920 19:43:50.422503  792944 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:43:50.442518  792944 ssh_runner.go:195] Run: openssl version
	I0920 19:43:50.449101  792944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7484972.pem && ln -fs /usr/share/ca-certificates/7484972.pem /etc/ssl/certs/7484972.pem"
	I0920 19:43:50.461762  792944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7484972.pem
	I0920 19:43:50.467079  792944 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 18:33 /usr/share/ca-certificates/7484972.pem
	I0920 19:43:50.467145  792944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7484972.pem
	I0920 19:43:50.473561  792944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7484972.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:43:50.484626  792944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:43:50.495456  792944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:43:50.500201  792944 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:43:50.500257  792944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:43:50.505998  792944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:43:50.517080  792944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/748497.pem && ln -fs /usr/share/ca-certificates/748497.pem /etc/ssl/certs/748497.pem"
	I0920 19:43:50.528181  792944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/748497.pem
	I0920 19:43:50.532580  792944 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 18:33 /usr/share/ca-certificates/748497.pem
	I0920 19:43:50.532635  792944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/748497.pem
	I0920 19:43:50.538425  792944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/748497.pem /etc/ssl/certs/51391683.0"
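
The openssl/ln steps above install each CA certificate into the system trust directory under its subject hash, so a "<hash>.0" symlink points back at the PEM file. Roughly the equivalent of those two shell commands, sketched in Go (the helper name is made up; paths are the ones from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert computes the certificate's subject hash with openssl and
// points "<hash>.0" in certsDir at it, like `openssl x509 -hash` plus `ln -fs`.
func installCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl x509 -hash: %w", err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // best-effort replace, mirroring ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
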
	I0920 19:43:50.550329  792944 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:43:50.554511  792944 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 19:43:50.554580  792944 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-220027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-220027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.238 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:43:50.554668  792944 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:43:50.554713  792944 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:43:50.592612  792944 cri.go:89] found id: ""
	I0920 19:43:50.592700  792944 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:43:50.602974  792944 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:43:50.612772  792944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:43:50.622426  792944 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:43:50.622448  792944 kubeadm.go:157] found existing configuration files:
	
	I0920 19:43:50.622495  792944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:43:50.631865  792944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:43:50.631934  792944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:43:50.641715  792944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:43:50.651308  792944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:43:50.651383  792944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:43:50.660848  792944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:43:50.669695  792944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:43:50.669762  792944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:43:50.678926  792944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:43:50.687946  792944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:43:50.688007  792944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:43:50.697394  792944 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 19:43:50.808499  792944 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 19:43:50.808747  792944 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:43:50.976457  792944 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:43:50.976583  792944 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:43:50.976686  792944 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 19:43:51.171031  792944 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:43:51.172999  792944 out.go:235]   - Generating certificates and keys ...
	I0920 19:43:51.173089  792944 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:43:51.173162  792944 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:43:51.266692  792944 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 19:43:51.399534  792944 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 19:43:51.450055  792944 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 19:43:51.640291  792944 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 19:43:51.699045  792944 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 19:43:51.699304  792944 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-220027 localhost] and IPs [192.168.72.238 127.0.0.1 ::1]
	I0920 19:43:52.084246  792944 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 19:43:52.084425  792944 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-220027 localhost] and IPs [192.168.72.238 127.0.0.1 ::1]
	I0920 19:43:52.326972  792944 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 19:43:52.587121  792944 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 19:43:52.982906  792944 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 19:43:52.982995  792944 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:43:53.680643  792944 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:43:54.006478  792944 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:43:54.391465  792944 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:43:54.468496  792944 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:43:54.490353  792944 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:43:54.502432  792944 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:43:54.502495  792944 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:43:54.695697  792944 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:43:54.697870  792944 out.go:235]   - Booting up control plane ...
	I0920 19:43:54.698046  792944 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:43:54.701415  792944 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:43:54.702765  792944 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:43:54.703976  792944 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:43:54.716309  792944 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 19:44:34.682139  792944 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 19:44:34.682809  792944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:44:34.683012  792944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:44:39.682256  792944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:44:39.682516  792944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:44:49.681305  792944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:44:49.681512  792944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:45:09.681216  792944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:45:09.681494  792944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:45:49.680983  792944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:45:49.682694  792944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:45:49.682720  792944 kubeadm.go:310] 
	I0920 19:45:49.682903  792944 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 19:45:49.683062  792944 kubeadm.go:310] 		timed out waiting for the condition
	I0920 19:45:49.683080  792944 kubeadm.go:310] 
	I0920 19:45:49.683165  792944 kubeadm.go:310] 	This error is likely caused by:
	I0920 19:45:49.683238  792944 kubeadm.go:310] 		- The kubelet is not running
	I0920 19:45:49.683522  792944 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 19:45:49.683551  792944 kubeadm.go:310] 
	I0920 19:45:49.683862  792944 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 19:45:49.683952  792944 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 19:45:49.684031  792944 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 19:45:49.684041  792944 kubeadm.go:310] 
	I0920 19:45:49.684440  792944 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 19:45:49.684934  792944 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 19:45:49.684989  792944 kubeadm.go:310] 
	I0920 19:45:49.685155  792944 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 19:45:49.685281  792944 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 19:45:49.685379  792944 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 19:45:49.685477  792944 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 19:45:49.685504  792944 kubeadm.go:310] 
	I0920 19:45:49.685677  792944 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:45:49.685826  792944 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 19:45:49.686014  792944 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0920 19:45:49.686141  792944 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-220027 localhost] and IPs [192.168.72.238 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-220027 localhost] and IPs [192.168.72.238 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-220027 localhost] and IPs [192.168.72.238 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-220027 localhost] and IPs [192.168.72.238 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0920 19:45:49.686191  792944 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 19:45:51.356133  792944 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.669916419s)
	I0920 19:45:51.356211  792944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:45:51.370282  792944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:45:51.381582  792944 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:45:51.381608  792944 kubeadm.go:157] found existing configuration files:
	
	I0920 19:45:51.381665  792944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:45:51.391622  792944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:45:51.391691  792944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:45:51.401171  792944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:45:51.409953  792944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:45:51.410017  792944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:45:51.419381  792944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:45:51.429099  792944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:45:51.429158  792944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:45:51.438462  792944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:45:51.447422  792944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:45:51.447478  792944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:45:51.459810  792944 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 19:45:51.690157  792944 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:47:47.796507  792944 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 19:47:47.796648  792944 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 19:47:47.798481  792944 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 19:47:47.798564  792944 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:47:47.798698  792944 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:47:47.798876  792944 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:47:47.799027  792944 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 19:47:47.799124  792944 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:47:47.800991  792944 out.go:235]   - Generating certificates and keys ...
	I0920 19:47:47.801098  792944 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:47:47.801187  792944 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:47:47.801309  792944 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 19:47:47.801394  792944 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 19:47:47.801487  792944 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 19:47:47.801573  792944 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 19:47:47.801663  792944 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 19:47:47.801752  792944 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 19:47:47.801888  792944 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 19:47:47.802018  792944 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 19:47:47.802085  792944 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 19:47:47.802166  792944 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:47:47.802215  792944 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:47:47.802278  792944 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:47:47.802358  792944 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:47:47.802425  792944 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:47:47.802591  792944 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:47:47.802729  792944 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:47:47.802791  792944 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:47:47.802921  792944 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:47:47.804571  792944 out.go:235]   - Booting up control plane ...
	I0920 19:47:47.804683  792944 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:47:47.804776  792944 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:47:47.804870  792944 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:47:47.804983  792944 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:47:47.805225  792944 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 19:47:47.805301  792944 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 19:47:47.805397  792944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:47:47.805649  792944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:47:47.805750  792944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:47:47.806036  792944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:47:47.806134  792944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:47:47.806385  792944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:47:47.806492  792944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:47:47.806755  792944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:47:47.806841  792944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 19:47:47.807144  792944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 19:47:47.807177  792944 kubeadm.go:310] 
	I0920 19:47:47.807241  792944 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 19:47:47.807299  792944 kubeadm.go:310] 		timed out waiting for the condition
	I0920 19:47:47.807309  792944 kubeadm.go:310] 
	I0920 19:47:47.807365  792944 kubeadm.go:310] 	This error is likely caused by:
	I0920 19:47:47.807399  792944 kubeadm.go:310] 		- The kubelet is not running
	I0920 19:47:47.807560  792944 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 19:47:47.807580  792944 kubeadm.go:310] 
	I0920 19:47:47.807724  792944 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 19:47:47.807784  792944 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 19:47:47.807834  792944 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 19:47:47.807848  792944 kubeadm.go:310] 
	I0920 19:47:47.807996  792944 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 19:47:47.808146  792944 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 19:47:47.808166  792944 kubeadm.go:310] 
	I0920 19:47:47.808312  792944 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 19:47:47.808430  792944 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 19:47:47.808542  792944 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 19:47:47.808645  792944 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 19:47:47.808674  792944 kubeadm.go:310] 
	I0920 19:47:47.808725  792944 kubeadm.go:394] duration metric: took 3m57.254148774s to StartCluster
	I0920 19:47:47.808775  792944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:47:47.808828  792944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:47:47.859437  792944 cri.go:89] found id: ""
	I0920 19:47:47.859471  792944 logs.go:276] 0 containers: []
	W0920 19:47:47.859483  792944 logs.go:278] No container was found matching "kube-apiserver"
	I0920 19:47:47.859492  792944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:47:47.859561  792944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:47:47.901753  792944 cri.go:89] found id: ""
	I0920 19:47:47.901787  792944 logs.go:276] 0 containers: []
	W0920 19:47:47.901799  792944 logs.go:278] No container was found matching "etcd"
	I0920 19:47:47.901808  792944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:47:47.901883  792944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:47:47.941595  792944 cri.go:89] found id: ""
	I0920 19:47:47.941639  792944 logs.go:276] 0 containers: []
	W0920 19:47:47.941651  792944 logs.go:278] No container was found matching "coredns"
	I0920 19:47:47.941659  792944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:47:47.941730  792944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:47:47.986565  792944 cri.go:89] found id: ""
	I0920 19:47:47.986595  792944 logs.go:276] 0 containers: []
	W0920 19:47:47.986605  792944 logs.go:278] No container was found matching "kube-scheduler"
	I0920 19:47:47.986613  792944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:47:47.986675  792944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:47:48.023049  792944 cri.go:89] found id: ""
	I0920 19:47:48.023080  792944 logs.go:276] 0 containers: []
	W0920 19:47:48.023092  792944 logs.go:278] No container was found matching "kube-proxy"
	I0920 19:47:48.023099  792944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:47:48.023172  792944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:47:48.081246  792944 cri.go:89] found id: ""
	I0920 19:47:48.081281  792944 logs.go:276] 0 containers: []
	W0920 19:47:48.081294  792944 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 19:47:48.081304  792944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:47:48.081373  792944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:47:48.119546  792944 cri.go:89] found id: ""
	I0920 19:47:48.119581  792944 logs.go:276] 0 containers: []
	W0920 19:47:48.119593  792944 logs.go:278] No container was found matching "kindnet"
	I0920 19:47:48.119606  792944 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:47:48.119623  792944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:47:48.228305  792944 logs.go:123] Gathering logs for container status ...
	I0920 19:47:48.228345  792944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:47:48.280716  792944 logs.go:123] Gathering logs for kubelet ...
	I0920 19:47:48.280759  792944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:47:48.349962  792944 logs.go:123] Gathering logs for dmesg ...
	I0920 19:47:48.350002  792944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:47:48.366037  792944 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:47:48.366073  792944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 19:47:48.517612  792944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0920 19:47:48.517703  792944 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0920 19:47:48.517839  792944 out.go:270] * 
	* 
	W0920 19:47:48.518092  792944 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 19:47:48.518143  792944 out.go:270] * 
	* 
	W0920 19:47:48.519377  792944 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 19:47:48.522938  792944 out.go:201] 
	W0920 19:47:48.524247  792944 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 19:47:48.524290  792944 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0920 19:47:48.524331  792944 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0920 19:47:48.526033  792944 out.go:201] 

                                                
                                                
** /stderr **
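For reference, the remediation steps that the kubeadm output and the minikube suggestion above point at can be run roughly as follows. This is a minimal sketch, not a verified fix: the healthz port, the CRI-O socket path, the CONTAINERID placeholder, and the --extra-config flag are all taken from the log above, and the profile name mirrors the one this test uses.

	# On the node: confirm whether the kubelet is up and read its recent logs
	systemctl status kubelet
	journalctl -xeu kubelet
	# Probe the health endpoint that the wait-control-plane phase polls
	curl -sSL http://localhost:10248/healthz
	# List control-plane containers via CRI-O and inspect a failing one
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	# From the host: retry with the cgroup driver override the suggestion mentions
	minikube start -p kubernetes-upgrade-220027 --extra-config=kubelet.cgroup-driver=systemd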
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-220027 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-220027
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-220027: (6.363305563s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-220027 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-220027 status --format={{.Host}}: exit status 7 (67.208902ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-220027 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-220027 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (50.006813947s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-220027 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-220027 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-220027 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (92.015137ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-220027] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-220027
	    minikube start -p kubernetes-upgrade-220027 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2200272 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-220027 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
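The K8S_DOWNGRADE_UNSUPPORTED exit is the expected outcome of this step ("should fail" above): minikube refuses an in-place downgrade from v1.31.1 to v1.20.0. As a sketch of option 1 from the printed suggestion, with the driver and runtime flags mirroring this test's own invocation:

	# Recreate the profile at the older Kubernetes version instead of downgrading in place
	minikube delete -p kubernetes-upgrade-220027
	minikube start -p kubernetes-upgrade-220027 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio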
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-220027 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-220027 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (14.675630176s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-09-20 19:48:59.878995669 +0000 UTC m=+5791.724640265
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-220027 -n kubernetes-upgrade-220027
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-220027 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-220027 logs -n 25: (1.513637009s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-010370 sudo                               | kindnet-010370            | jenkins | v1.34.0 | 20 Sep 24 19:47 UTC | 20 Sep 24 19:47 UTC |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010370 sudo cat                           | kindnet-010370            | jenkins | v1.34.0 | 20 Sep 24 19:47 UTC | 20 Sep 24 19:47 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010370 sudo docker                        | kindnet-010370            | jenkins | v1.34.0 | 20 Sep 24 19:47 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010370 sudo                               | kindnet-010370            | jenkins | v1.34.0 | 20 Sep 24 19:47 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010370 sudo                               | kindnet-010370            | jenkins | v1.34.0 | 20 Sep 24 19:47 UTC | 20 Sep 24 19:47 UTC |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010370 sudo cat                           | kindnet-010370            | jenkins | v1.34.0 | 20 Sep 24 19:47 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010370 sudo cat                           | kindnet-010370            | jenkins | v1.34.0 | 20 Sep 24 19:47 UTC | 20 Sep 24 19:47 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010370 sudo                               | kindnet-010370            | jenkins | v1.34.0 | 20 Sep 24 19:47 UTC | 20 Sep 24 19:47 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010370 sudo                               | kindnet-010370            | jenkins | v1.34.0 | 20 Sep 24 19:47 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010370 sudo                               | kindnet-010370            | jenkins | v1.34.0 | 20 Sep 24 19:47 UTC | 20 Sep 24 19:47 UTC |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010370 sudo cat                           | kindnet-010370            | jenkins | v1.34.0 | 20 Sep 24 19:47 UTC | 20 Sep 24 19:47 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010370 sudo cat                           | kindnet-010370            | jenkins | v1.34.0 | 20 Sep 24 19:47 UTC | 20 Sep 24 19:47 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010370 sudo                               | kindnet-010370            | jenkins | v1.34.0 | 20 Sep 24 19:47 UTC | 20 Sep 24 19:47 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010370 sudo                               | kindnet-010370            | jenkins | v1.34.0 | 20 Sep 24 19:47 UTC | 20 Sep 24 19:47 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010370 sudo                               | kindnet-010370            | jenkins | v1.34.0 | 20 Sep 24 19:47 UTC | 20 Sep 24 19:47 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010370 sudo find                          | kindnet-010370            | jenkins | v1.34.0 | 20 Sep 24 19:47 UTC | 20 Sep 24 19:47 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010370 sudo crio                          | kindnet-010370            | jenkins | v1.34.0 | 20 Sep 24 19:47 UTC | 20 Sep 24 19:47 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p kindnet-010370                                    | kindnet-010370            | jenkins | v1.34.0 | 20 Sep 24 19:47 UTC | 20 Sep 24 19:47 UTC |
	| start   | -p custom-flannel-010370                             | custom-flannel-010370     | jenkins | v1.34.0 | 20 Sep 24 19:47 UTC | 20 Sep 24 19:48 UTC |
	|         | --memory=3072 --alsologtostderr                      |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                           |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-220027                         | kubernetes-upgrade-220027 | jenkins | v1.34.0 | 20 Sep 24 19:47 UTC | 20 Sep 24 19:47 UTC |
	| start   | -p kubernetes-upgrade-220027                         | kubernetes-upgrade-220027 | jenkins | v1.34.0 | 20 Sep 24 19:47 UTC | 20 Sep 24 19:48 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p calico-010370 pgrep -a                            | calico-010370             | jenkins | v1.34.0 | 20 Sep 24 19:48 UTC | 20 Sep 24 19:48 UTC |
	|         | kubelet                                              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-220027                         | kubernetes-upgrade-220027 | jenkins | v1.34.0 | 20 Sep 24 19:48 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-220027                         | kubernetes-upgrade-220027 | jenkins | v1.34.0 | 20 Sep 24 19:48 UTC | 20 Sep 24 19:48 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-010370 pgrep                       | custom-flannel-010370     | jenkins | v1.34.0 | 20 Sep 24 19:48 UTC | 20 Sep 24 19:48 UTC |
	|         | -a kubelet                                           |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 19:48:45
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 19:48:45.250098  800156 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:48:45.250351  800156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:48:45.250361  800156 out.go:358] Setting ErrFile to fd 2...
	I0920 19:48:45.250366  800156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:48:45.250556  800156 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
	I0920 19:48:45.251158  800156 out.go:352] Setting JSON to false
	I0920 19:48:45.252207  800156 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12675,"bootTime":1726849050,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 19:48:45.252273  800156 start.go:139] virtualization: kvm guest
	I0920 19:48:45.254286  800156 out.go:177] * [kubernetes-upgrade-220027] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 19:48:45.255545  800156 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 19:48:45.255579  800156 notify.go:220] Checking for updates...
	I0920 19:48:45.258207  800156 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:48:45.259710  800156 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 19:48:45.261000  800156 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 19:48:45.262326  800156 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 19:48:45.263789  800156 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:48:45.265505  800156 config.go:182] Loaded profile config "kubernetes-upgrade-220027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:48:45.265905  800156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:48:45.265986  800156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:48:45.282305  800156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43179
	I0920 19:48:45.282779  800156 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:48:45.283345  800156 main.go:141] libmachine: Using API Version  1
	I0920 19:48:45.283365  800156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:48:45.283794  800156 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:48:45.283998  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .DriverName
	I0920 19:48:45.284306  800156 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:48:45.284791  800156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:48:45.284842  800156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:48:45.300645  800156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34873
	I0920 19:48:45.301203  800156 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:48:45.301677  800156 main.go:141] libmachine: Using API Version  1
	I0920 19:48:45.301699  800156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:48:45.302041  800156 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:48:45.302223  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .DriverName
	I0920 19:48:45.344600  800156 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 19:48:45.345915  800156 start.go:297] selected driver: kvm2
	I0920 19:48:45.345933  800156 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-220027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-220027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:48:45.346071  800156 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:48:45.346796  800156 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:48:45.346902  800156 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19678-739831/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 19:48:45.362696  800156 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 19:48:45.363172  800156 cni.go:84] Creating CNI manager for ""
	I0920 19:48:45.363238  800156 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:48:45.363281  800156 start.go:340] cluster config:
	{Name:kubernetes-upgrade-220027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-220027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:48:45.363417  800156 iso.go:125] acquiring lock: {Name:mk7c8e0c52ea50ffb7ac28fb9347d4f667085c62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:48:45.365213  800156 out.go:177] * Starting "kubernetes-upgrade-220027" primary control-plane node in "kubernetes-upgrade-220027" cluster
	I0920 19:48:45.366339  800156 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:48:45.366383  800156 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 19:48:45.366397  800156 cache.go:56] Caching tarball of preloaded images
	I0920 19:48:45.366496  800156 preload.go:172] Found /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 19:48:45.366511  800156 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 19:48:45.366638  800156 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/config.json ...
	I0920 19:48:45.366897  800156 start.go:360] acquireMachinesLock for kubernetes-upgrade-220027: {Name:mke27b943eaf3105a3a7818ba8cbb5bd07aa92e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 19:48:45.366975  800156 start.go:364] duration metric: took 55.253µs to acquireMachinesLock for "kubernetes-upgrade-220027"
	I0920 19:48:45.366994  800156 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:48:45.367002  800156 fix.go:54] fixHost starting: 
	I0920 19:48:45.367290  800156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:48:45.367324  800156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:48:45.384089  800156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39843
	I0920 19:48:45.384632  800156 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:48:45.385153  800156 main.go:141] libmachine: Using API Version  1
	I0920 19:48:45.385184  800156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:48:45.385550  800156 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:48:45.385766  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .DriverName
	I0920 19:48:45.385971  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetState
	I0920 19:48:45.387630  800156 fix.go:112] recreateIfNeeded on kubernetes-upgrade-220027: state=Running err=<nil>
	W0920 19:48:45.387662  800156 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:48:45.389521  800156 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-220027" VM ...
	I0920 19:48:44.605277  795968 api_server.go:269] stopped: https://192.168.39.60:8443/healthz: Get "https://192.168.39.60:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 19:48:44.605326  795968 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0920 19:48:44.514108  799298 pod_ready.go:103] pod "coredns-7c65d6cfc9-vfjlp" in "kube-system" namespace has status "Ready":"False"
	I0920 19:48:47.013266  799298 pod_ready.go:103] pod "coredns-7c65d6cfc9-vfjlp" in "kube-system" namespace has status "Ready":"False"
	I0920 19:48:45.390707  800156 machine.go:93] provisionDockerMachine start ...
	I0920 19:48:45.390734  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .DriverName
	I0920 19:48:45.391002  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHHostname
	I0920 19:48:45.393709  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:48:45.394193  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:b6:97", ip: ""} in network mk-kubernetes-upgrade-220027: {Iface:virbr4 ExpiryTime:2024-09-20 20:48:19 +0000 UTC Type:0 Mac:52:54:00:09:b6:97 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:kubernetes-upgrade-220027 Clientid:01:52:54:00:09:b6:97}
	I0920 19:48:45.394221  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:48:45.394373  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHPort
	I0920 19:48:45.394581  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHKeyPath
	I0920 19:48:45.394770  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHKeyPath
	I0920 19:48:45.394961  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHUsername
	I0920 19:48:45.395143  800156 main.go:141] libmachine: Using SSH client type: native
	I0920 19:48:45.395409  800156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I0920 19:48:45.395428  800156 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:48:45.511316  800156 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-220027
	
	I0920 19:48:45.511348  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetMachineName
	I0920 19:48:45.511625  800156 buildroot.go:166] provisioning hostname "kubernetes-upgrade-220027"
	I0920 19:48:45.511661  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetMachineName
	I0920 19:48:45.511883  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHHostname
	I0920 19:48:45.514539  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:48:45.514970  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:b6:97", ip: ""} in network mk-kubernetes-upgrade-220027: {Iface:virbr4 ExpiryTime:2024-09-20 20:48:19 +0000 UTC Type:0 Mac:52:54:00:09:b6:97 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:kubernetes-upgrade-220027 Clientid:01:52:54:00:09:b6:97}
	I0920 19:48:45.515008  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:48:45.515168  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHPort
	I0920 19:48:45.515301  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHKeyPath
	I0920 19:48:45.515465  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHKeyPath
	I0920 19:48:45.515570  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHUsername
	I0920 19:48:45.515773  800156 main.go:141] libmachine: Using SSH client type: native
	I0920 19:48:45.515948  800156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I0920 19:48:45.515961  800156 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-220027 && echo "kubernetes-upgrade-220027" | sudo tee /etc/hostname
	I0920 19:48:45.651810  800156 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-220027
	
	I0920 19:48:45.651843  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHHostname
	I0920 19:48:45.655141  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:48:45.655706  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:b6:97", ip: ""} in network mk-kubernetes-upgrade-220027: {Iface:virbr4 ExpiryTime:2024-09-20 20:48:19 +0000 UTC Type:0 Mac:52:54:00:09:b6:97 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:kubernetes-upgrade-220027 Clientid:01:52:54:00:09:b6:97}
	I0920 19:48:45.655737  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:48:45.655971  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHPort
	I0920 19:48:45.656181  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHKeyPath
	I0920 19:48:45.656350  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHKeyPath
	I0920 19:48:45.656554  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHUsername
	I0920 19:48:45.656759  800156 main.go:141] libmachine: Using SSH client type: native
	I0920 19:48:45.656981  800156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I0920 19:48:45.657007  800156 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-220027' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-220027/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-220027' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:48:45.772028  800156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:48:45.772071  800156 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19678-739831/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-739831/.minikube}
	I0920 19:48:45.772129  800156 buildroot.go:174] setting up certificates
	I0920 19:48:45.772143  800156 provision.go:84] configureAuth start
	I0920 19:48:45.772158  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetMachineName
	I0920 19:48:45.772456  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetIP
	I0920 19:48:45.775622  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:48:45.775998  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:b6:97", ip: ""} in network mk-kubernetes-upgrade-220027: {Iface:virbr4 ExpiryTime:2024-09-20 20:48:19 +0000 UTC Type:0 Mac:52:54:00:09:b6:97 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:kubernetes-upgrade-220027 Clientid:01:52:54:00:09:b6:97}
	I0920 19:48:45.776032  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:48:45.776225  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHHostname
	I0920 19:48:45.778620  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:48:45.779057  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:b6:97", ip: ""} in network mk-kubernetes-upgrade-220027: {Iface:virbr4 ExpiryTime:2024-09-20 20:48:19 +0000 UTC Type:0 Mac:52:54:00:09:b6:97 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:kubernetes-upgrade-220027 Clientid:01:52:54:00:09:b6:97}
	I0920 19:48:45.779089  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:48:45.779201  800156 provision.go:143] copyHostCerts
	I0920 19:48:45.779269  800156 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem, removing ...
	I0920 19:48:45.779290  800156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 19:48:45.779395  800156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem (1078 bytes)
	I0920 19:48:45.779499  800156 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem, removing ...
	I0920 19:48:45.779507  800156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 19:48:45.779534  800156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem (1123 bytes)
	I0920 19:48:45.779589  800156 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem, removing ...
	I0920 19:48:45.779594  800156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 19:48:45.779614  800156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem (1679 bytes)
	I0920 19:48:45.779663  800156 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-220027 san=[127.0.0.1 192.168.72.238 kubernetes-upgrade-220027 localhost minikube]
	I0920 19:48:45.912493  800156 provision.go:177] copyRemoteCerts
	I0920 19:48:45.912577  800156 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:48:45.912612  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHHostname
	I0920 19:48:45.915542  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:48:45.915927  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:b6:97", ip: ""} in network mk-kubernetes-upgrade-220027: {Iface:virbr4 ExpiryTime:2024-09-20 20:48:19 +0000 UTC Type:0 Mac:52:54:00:09:b6:97 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:kubernetes-upgrade-220027 Clientid:01:52:54:00:09:b6:97}
	I0920 19:48:45.915960  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:48:45.916192  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHPort
	I0920 19:48:45.916402  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHKeyPath
	I0920 19:48:45.916594  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHUsername
	I0920 19:48:45.916752  800156 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/kubernetes-upgrade-220027/id_rsa Username:docker}
	I0920 19:48:46.019093  800156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:48:46.055487  800156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0920 19:48:46.084985  800156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 19:48:46.145694  800156 provision.go:87] duration metric: took 373.534833ms to configureAuth
	I0920 19:48:46.145723  800156 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:48:46.145976  800156 config.go:182] Loaded profile config "kubernetes-upgrade-220027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:48:46.146070  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHHostname
	I0920 19:48:46.149186  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:48:46.149554  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:b6:97", ip: ""} in network mk-kubernetes-upgrade-220027: {Iface:virbr4 ExpiryTime:2024-09-20 20:48:19 +0000 UTC Type:0 Mac:52:54:00:09:b6:97 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:kubernetes-upgrade-220027 Clientid:01:52:54:00:09:b6:97}
	I0920 19:48:46.149594  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:48:46.149741  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHPort
	I0920 19:48:46.149972  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHKeyPath
	I0920 19:48:46.150168  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHKeyPath
	I0920 19:48:46.150300  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHUsername
	I0920 19:48:46.150432  800156 main.go:141] libmachine: Using SSH client type: native
	I0920 19:48:46.150666  800156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I0920 19:48:46.150684  800156 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:48:47.258533  800156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:48:47.258572  800156 machine.go:96] duration metric: took 1.867845444s to provisionDockerMachine
	I0920 19:48:47.258585  800156 start.go:293] postStartSetup for "kubernetes-upgrade-220027" (driver="kvm2")
	I0920 19:48:47.258613  800156 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:48:47.258644  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .DriverName
	I0920 19:48:47.259004  800156 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:48:47.259037  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHHostname
	I0920 19:48:47.262301  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:48:47.262715  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:b6:97", ip: ""} in network mk-kubernetes-upgrade-220027: {Iface:virbr4 ExpiryTime:2024-09-20 20:48:19 +0000 UTC Type:0 Mac:52:54:00:09:b6:97 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:kubernetes-upgrade-220027 Clientid:01:52:54:00:09:b6:97}
	I0920 19:48:47.262739  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:48:47.262925  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHPort
	I0920 19:48:47.263139  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHKeyPath
	I0920 19:48:47.263326  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHUsername
	I0920 19:48:47.263495  800156 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/kubernetes-upgrade-220027/id_rsa Username:docker}
	I0920 19:48:47.446900  800156 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:48:47.485341  800156 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:48:47.485466  800156 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/addons for local assets ...
	I0920 19:48:47.485579  800156 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/files for local assets ...
	I0920 19:48:47.485661  800156 filesync.go:149] local asset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> 7484972.pem in /etc/ssl/certs
	I0920 19:48:47.485827  800156 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:48:47.566465  800156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 19:48:47.640816  800156 start.go:296] duration metric: took 382.210389ms for postStartSetup
	I0920 19:48:47.640868  800156 fix.go:56] duration metric: took 2.273864785s for fixHost
	I0920 19:48:47.640896  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHHostname
	I0920 19:48:47.644149  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:48:47.644612  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:b6:97", ip: ""} in network mk-kubernetes-upgrade-220027: {Iface:virbr4 ExpiryTime:2024-09-20 20:48:19 +0000 UTC Type:0 Mac:52:54:00:09:b6:97 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:kubernetes-upgrade-220027 Clientid:01:52:54:00:09:b6:97}
	I0920 19:48:47.644689  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:48:47.644852  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHPort
	I0920 19:48:47.645108  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHKeyPath
	I0920 19:48:47.645349  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHKeyPath
	I0920 19:48:47.645517  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHUsername
	I0920 19:48:47.645725  800156 main.go:141] libmachine: Using SSH client type: native
	I0920 19:48:47.645966  800156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.238 22 <nil> <nil>}
	I0920 19:48:47.645983  800156 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:48:47.987001  800156 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726861727.972517235
	
	I0920 19:48:47.987029  800156 fix.go:216] guest clock: 1726861727.972517235
	I0920 19:48:47.987039  800156 fix.go:229] Guest: 2024-09-20 19:48:47.972517235 +0000 UTC Remote: 2024-09-20 19:48:47.640873626 +0000 UTC m=+2.433322667 (delta=331.643609ms)
	I0920 19:48:47.987067  800156 fix.go:200] guest clock delta is within tolerance: 331.643609ms
	I0920 19:48:47.987074  800156 start.go:83] releasing machines lock for "kubernetes-upgrade-220027", held for 2.620087396s
	I0920 19:48:47.987097  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .DriverName
	I0920 19:48:47.987394  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetIP
	I0920 19:48:47.990670  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:48:47.991189  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:b6:97", ip: ""} in network mk-kubernetes-upgrade-220027: {Iface:virbr4 ExpiryTime:2024-09-20 20:48:19 +0000 UTC Type:0 Mac:52:54:00:09:b6:97 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:kubernetes-upgrade-220027 Clientid:01:52:54:00:09:b6:97}
	I0920 19:48:47.991220  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:48:47.991390  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .DriverName
	I0920 19:48:47.992072  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .DriverName
	I0920 19:48:47.992266  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .DriverName
	I0920 19:48:47.992397  800156 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:48:47.992443  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHHostname
	I0920 19:48:47.992481  800156 ssh_runner.go:195] Run: cat /version.json
	I0920 19:48:47.992502  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHHostname
	I0920 19:48:47.995331  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:48:47.995636  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:48:47.995670  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:b6:97", ip: ""} in network mk-kubernetes-upgrade-220027: {Iface:virbr4 ExpiryTime:2024-09-20 20:48:19 +0000 UTC Type:0 Mac:52:54:00:09:b6:97 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:kubernetes-upgrade-220027 Clientid:01:52:54:00:09:b6:97}
	I0920 19:48:47.995686  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:48:47.996006  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHPort
	I0920 19:48:47.996179  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:b6:97", ip: ""} in network mk-kubernetes-upgrade-220027: {Iface:virbr4 ExpiryTime:2024-09-20 20:48:19 +0000 UTC Type:0 Mac:52:54:00:09:b6:97 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:kubernetes-upgrade-220027 Clientid:01:52:54:00:09:b6:97}
	I0920 19:48:47.996199  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHKeyPath
	I0920 19:48:47.996245  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:48:47.996421  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHUsername
	I0920 19:48:47.996468  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHPort
	I0920 19:48:47.996565  800156 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/kubernetes-upgrade-220027/id_rsa Username:docker}
	I0920 19:48:47.996624  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHKeyPath
	I0920 19:48:47.996795  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetSSHUsername
	I0920 19:48:47.997000  800156 sshutil.go:53] new ssh client: &{IP:192.168.72.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/kubernetes-upgrade-220027/id_rsa Username:docker}
	I0920 19:48:48.125790  800156 ssh_runner.go:195] Run: systemctl --version
	I0920 19:48:48.132965  800156 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:48:48.336567  800156 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 19:48:48.391833  800156 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:48:48.391932  800156 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:48:48.408310  800156 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 19:48:48.408345  800156 start.go:495] detecting cgroup driver to use...
	I0920 19:48:48.408441  800156 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:48:48.431731  800156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:48:48.453355  800156 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:48:48.453435  800156 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:48:48.476732  800156 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:48:48.495848  800156 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:48:48.709082  800156 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:48:48.920797  800156 docker.go:233] disabling docker service ...
	I0920 19:48:48.920880  800156 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:48:48.940825  800156 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:48:48.964867  800156 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:48:49.140740  800156 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:48:49.363538  800156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:48:49.383764  800156 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:48:49.427382  800156 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 19:48:49.427470  800156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:48:49.446229  800156 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:48:49.446317  800156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:48:49.458882  800156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:48:49.472804  800156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:48:49.486456  800156 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:48:49.500101  800156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:48:49.514695  800156 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:48:49.530453  800156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:48:49.543875  800156 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:48:49.559176  800156 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:48:49.571702  800156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:48:49.766662  800156 ssh_runner.go:195] Run: sudo systemctl restart crio
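
The steps above point crictl at the CRI-O socket via /etc/crictl.yaml and then patch /etc/crio/crio.conf.d/02-crio.conf with sed before restarting the daemon. A minimal Go sketch of the same edits, assuming direct root access to the node's files rather than minikube's ssh_runner, and covering only the pause-image and cgroup-driver changes shown in the log:

	package main

	import (
		"os"
		"regexp"
	)

	// patchCrioConf mirrors the sed edits logged above: drop any existing
	// conmon_cgroup line, pin the pause image, and switch the cgroup manager
	// to cgroupfs (re-adding conmon_cgroup = "pod"). Error handling is minimal.
	func patchCrioConf(path string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		data = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAll(data, []byte(""))
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
		return os.WriteFile(path, data, 0o644)
	}

	func main() {
		// crictl is pointed at the CRI-O socket exactly as in the log.
		crictlYAML := []byte("runtime-endpoint: unix:///var/run/crio/crio.sock\n")
		if err := os.WriteFile("/etc/crictl.yaml", crictlYAML, 0o644); err != nil {
			panic(err)
		}
		if err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
			panic(err)
		}
	}
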
	I0920 19:48:50.416617  800156 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:48:50.416698  800156 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:48:50.421688  800156 start.go:563] Will wait 60s for crictl version
	I0920 19:48:50.421778  800156 ssh_runner.go:195] Run: which crictl
	I0920 19:48:50.425849  800156 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:48:50.464501  800156 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 19:48:50.464602  800156 ssh_runner.go:195] Run: crio --version
	I0920 19:48:50.496390  800156 ssh_runner.go:195] Run: crio --version
	I0920 19:48:50.529740  800156 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
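
start.go waits up to 60s for /var/run/crio/crio.sock and then shells out to crictl, as logged above. A rough local equivalent of that wait-and-probe, with the socket path and crictl invocation taken from the log:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// waitForSocket polls for the CRI-O socket the way start.go waits for
	// /var/run/crio/crio.sock before talking to the runtime.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			panic(err)
		}
		// Equivalent to the "sudo /usr/bin/crictl version" run in the log.
		out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}
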
	I0920 19:48:49.605902  795968 api_server.go:269] stopped: https://192.168.39.60:8443/healthz: Get "https://192.168.39.60:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 19:48:49.605963  795968 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0920 19:48:50.440690  795968 api_server.go:269] stopped: https://192.168.39.60:8443/healthz: Get "https://192.168.39.60:8443/healthz": read tcp 192.168.39.1:51976->192.168.39.60:8443: read: connection reset by peer
	I0920 19:48:50.440741  795968 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0920 19:48:50.441261  795968 api_server.go:269] stopped: https://192.168.39.60:8443/healthz: Get "https://192.168.39.60:8443/healthz": dial tcp 192.168.39.60:8443: connect: connection refused
	I0920 19:48:50.601646  795968 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0920 19:48:50.602419  795968 api_server.go:269] stopped: https://192.168.39.60:8443/healthz: Get "https://192.168.39.60:8443/healthz": dial tcp 192.168.39.60:8443: connect: connection refused
	I0920 19:48:51.102031  795968 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0920 19:48:51.102719  795968 api_server.go:269] stopped: https://192.168.39.60:8443/healthz: Get "https://192.168.39.60:8443/healthz": dial tcp 192.168.39.60:8443: connect: connection refused
	I0920 19:48:51.602069  795968 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0920 19:48:51.602830  795968 api_server.go:269] stopped: https://192.168.39.60:8443/healthz: Get "https://192.168.39.60:8443/healthz": dial tcp 192.168.39.60:8443: connect: connection refused
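
api_server.go keeps probing https://192.168.39.60:8443/healthz until it answers 200; the connection-refused and reset-by-peer errors above are expected while that apiserver comes back up. A small sketch of such a probe loop, with the address taken from the log and TLS verification skipped for brevity (minikube's own check uses the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// probeHealthz polls the apiserver /healthz endpoint until it returns 200
	// or the timeout elapses.
	func probeHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy within %s", timeout)
	}

	func main() {
		if err := probeHealthz("https://192.168.39.60:8443/healthz", 4*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("apiserver healthy")
	}
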
	I0920 19:48:49.013926  799298 pod_ready.go:103] pod "coredns-7c65d6cfc9-vfjlp" in "kube-system" namespace has status "Ready":"False"
	I0920 19:48:51.513133  799298 pod_ready.go:103] pod "coredns-7c65d6cfc9-vfjlp" in "kube-system" namespace has status "Ready":"False"
	I0920 19:48:52.512912  799298 pod_ready.go:93] pod "coredns-7c65d6cfc9-vfjlp" in "kube-system" namespace has status "Ready":"True"
	I0920 19:48:52.512941  799298 pod_ready.go:82] duration metric: took 12.508132652s for pod "coredns-7c65d6cfc9-vfjlp" in "kube-system" namespace to be "Ready" ...
	I0920 19:48:52.512953  799298 pod_ready.go:79] waiting up to 15m0s for pod "etcd-custom-flannel-010370" in "kube-system" namespace to be "Ready" ...
	I0920 19:48:52.517535  799298 pod_ready.go:93] pod "etcd-custom-flannel-010370" in "kube-system" namespace has status "Ready":"True"
	I0920 19:48:52.517564  799298 pod_ready.go:82] duration metric: took 4.602543ms for pod "etcd-custom-flannel-010370" in "kube-system" namespace to be "Ready" ...
	I0920 19:48:52.517575  799298 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-custom-flannel-010370" in "kube-system" namespace to be "Ready" ...
	I0920 19:48:52.521930  799298 pod_ready.go:93] pod "kube-apiserver-custom-flannel-010370" in "kube-system" namespace has status "Ready":"True"
	I0920 19:48:52.521954  799298 pod_ready.go:82] duration metric: took 4.370018ms for pod "kube-apiserver-custom-flannel-010370" in "kube-system" namespace to be "Ready" ...
	I0920 19:48:52.521967  799298 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-custom-flannel-010370" in "kube-system" namespace to be "Ready" ...
	I0920 19:48:52.525955  799298 pod_ready.go:93] pod "kube-controller-manager-custom-flannel-010370" in "kube-system" namespace has status "Ready":"True"
	I0920 19:48:52.525976  799298 pod_ready.go:82] duration metric: took 4.000576ms for pod "kube-controller-manager-custom-flannel-010370" in "kube-system" namespace to be "Ready" ...
	I0920 19:48:52.525988  799298 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-s59w8" in "kube-system" namespace to be "Ready" ...
	I0920 19:48:52.530188  799298 pod_ready.go:93] pod "kube-proxy-s59w8" in "kube-system" namespace has status "Ready":"True"
	I0920 19:48:52.530208  799298 pod_ready.go:82] duration metric: took 4.212979ms for pod "kube-proxy-s59w8" in "kube-system" namespace to be "Ready" ...
	I0920 19:48:52.530216  799298 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-custom-flannel-010370" in "kube-system" namespace to be "Ready" ...
	I0920 19:48:52.910033  799298 pod_ready.go:93] pod "kube-scheduler-custom-flannel-010370" in "kube-system" namespace has status "Ready":"True"
	I0920 19:48:52.910068  799298 pod_ready.go:82] duration metric: took 379.841806ms for pod "kube-scheduler-custom-flannel-010370" in "kube-system" namespace to be "Ready" ...
	I0920 19:48:52.910082  799298 pod_ready.go:39] duration metric: took 12.913075144s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
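
pod_ready.go above blocks until CoreDNS and the control-plane pods report Ready. Outside of minikube the same condition can be approximated with kubectl wait; a hedged sketch using the label selectors from the summary line and the profile's kubectl context:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Label selectors mirror pod_ready.go's summary line; the context name
		// comes from the test profile. This approximates the readiness wait and
		// is not minikube's actual implementation.
		selectors := []string{
			"k8s-app=kube-dns",
			"component=etcd",
			"component=kube-apiserver",
			"component=kube-controller-manager",
			"k8s-app=kube-proxy",
			"component=kube-scheduler",
		}
		for _, sel := range selectors {
			cmd := exec.Command("kubectl", "--context", "custom-flannel-010370",
				"-n", "kube-system", "wait", "--for=condition=Ready",
				"pod", "-l", sel, "--timeout=15m")
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "pods with %q not ready: %v\n", sel, err)
				os.Exit(1)
			}
		}
	}
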
	I0920 19:48:52.910098  799298 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:48:52.910164  799298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:48:52.926528  799298 api_server.go:72] duration metric: took 23.327506804s to wait for apiserver process to appear ...
	I0920 19:48:52.926562  799298 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:48:52.926587  799298 api_server.go:253] Checking apiserver healthz at https://192.168.50.88:8443/healthz ...
	I0920 19:48:52.930971  799298 api_server.go:279] https://192.168.50.88:8443/healthz returned 200:
	ok
	I0920 19:48:52.932027  799298 api_server.go:141] control plane version: v1.31.1
	I0920 19:48:52.932055  799298 api_server.go:131] duration metric: took 5.484629ms to wait for apiserver health ...
	I0920 19:48:52.932067  799298 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:48:53.113299  799298 system_pods.go:59] 7 kube-system pods found
	I0920 19:48:53.113335  799298 system_pods.go:61] "coredns-7c65d6cfc9-vfjlp" [cd666ecd-2ea0-4493-89c0-74ce253bc0d3] Running
	I0920 19:48:53.113342  799298 system_pods.go:61] "etcd-custom-flannel-010370" [81c0cb36-49ff-402a-8703-46835c88cfc8] Running
	I0920 19:48:53.113348  799298 system_pods.go:61] "kube-apiserver-custom-flannel-010370" [fe6ec38e-21fc-40e2-943d-c849b7e0487e] Running
	I0920 19:48:53.113354  799298 system_pods.go:61] "kube-controller-manager-custom-flannel-010370" [d1680853-2bdf-453f-877c-8159b8643e4c] Running
	I0920 19:48:53.113358  799298 system_pods.go:61] "kube-proxy-s59w8" [e6c174b7-0051-40e5-9d28-d3956a02f838] Running
	I0920 19:48:53.113362  799298 system_pods.go:61] "kube-scheduler-custom-flannel-010370" [d20775f4-9744-4ad7-b5b5-412fb722a6a7] Running
	I0920 19:48:53.113367  799298 system_pods.go:61] "storage-provisioner" [f3b66206-f161-4bc9-ba87-34bb721ebd75] Running
	I0920 19:48:53.113381  799298 system_pods.go:74] duration metric: took 181.300482ms to wait for pod list to return data ...
	I0920 19:48:53.113391  799298 default_sa.go:34] waiting for default service account to be created ...
	I0920 19:48:53.309319  799298 default_sa.go:45] found service account: "default"
	I0920 19:48:53.309346  799298 default_sa.go:55] duration metric: took 195.94701ms for default service account to be created ...
	I0920 19:48:53.309359  799298 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 19:48:53.512032  799298 system_pods.go:86] 7 kube-system pods found
	I0920 19:48:53.512062  799298 system_pods.go:89] "coredns-7c65d6cfc9-vfjlp" [cd666ecd-2ea0-4493-89c0-74ce253bc0d3] Running
	I0920 19:48:53.512068  799298 system_pods.go:89] "etcd-custom-flannel-010370" [81c0cb36-49ff-402a-8703-46835c88cfc8] Running
	I0920 19:48:53.512072  799298 system_pods.go:89] "kube-apiserver-custom-flannel-010370" [fe6ec38e-21fc-40e2-943d-c849b7e0487e] Running
	I0920 19:48:53.512076  799298 system_pods.go:89] "kube-controller-manager-custom-flannel-010370" [d1680853-2bdf-453f-877c-8159b8643e4c] Running
	I0920 19:48:53.512079  799298 system_pods.go:89] "kube-proxy-s59w8" [e6c174b7-0051-40e5-9d28-d3956a02f838] Running
	I0920 19:48:53.512082  799298 system_pods.go:89] "kube-scheduler-custom-flannel-010370" [d20775f4-9744-4ad7-b5b5-412fb722a6a7] Running
	I0920 19:48:53.512085  799298 system_pods.go:89] "storage-provisioner" [f3b66206-f161-4bc9-ba87-34bb721ebd75] Running
	I0920 19:48:53.512093  799298 system_pods.go:126] duration metric: took 202.72692ms to wait for k8s-apps to be running ...
	I0920 19:48:53.512100  799298 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 19:48:53.512152  799298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:48:53.526688  799298 system_svc.go:56] duration metric: took 14.575097ms WaitForService to wait for kubelet
	I0920 19:48:53.526722  799298 kubeadm.go:582] duration metric: took 23.927705684s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:48:53.526745  799298 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:48:53.710477  799298 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 19:48:53.710519  799298 node_conditions.go:123] node cpu capacity is 2
	I0920 19:48:53.710538  799298 node_conditions.go:105] duration metric: took 183.786044ms to run NodePressure ...
	I0920 19:48:53.710554  799298 start.go:241] waiting for startup goroutines ...
	I0920 19:48:53.710563  799298 start.go:246] waiting for cluster config update ...
	I0920 19:48:53.710580  799298 start.go:255] writing updated cluster config ...
	I0920 19:48:53.711008  799298 ssh_runner.go:195] Run: rm -f paused
	I0920 19:48:53.770799  799298 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 19:48:53.772896  799298 out.go:177] * Done! kubectl is now configured to use "custom-flannel-010370" cluster and "default" namespace by default
	I0920 19:48:50.531052  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) Calling .GetIP
	I0920 19:48:50.533872  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:48:50.534243  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:b6:97", ip: ""} in network mk-kubernetes-upgrade-220027: {Iface:virbr4 ExpiryTime:2024-09-20 20:48:19 +0000 UTC Type:0 Mac:52:54:00:09:b6:97 Iaid: IPaddr:192.168.72.238 Prefix:24 Hostname:kubernetes-upgrade-220027 Clientid:01:52:54:00:09:b6:97}
	I0920 19:48:50.534277  800156 main.go:141] libmachine: (kubernetes-upgrade-220027) DBG | domain kubernetes-upgrade-220027 has defined IP address 192.168.72.238 and MAC address 52:54:00:09:b6:97 in network mk-kubernetes-upgrade-220027
	I0920 19:48:50.534471  800156 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0920 19:48:50.538684  800156 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-220027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:kubernetes-upgrade-220027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:48:50.538778  800156 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:48:50.538834  800156 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:48:50.583554  800156 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 19:48:50.583578  800156 crio.go:433] Images already preloaded, skipping extraction
	I0920 19:48:50.583628  800156 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:48:50.617810  800156 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 19:48:50.617832  800156 cache_images.go:84] Images are preloaded, skipping loading
	I0920 19:48:50.617840  800156 kubeadm.go:934] updating node { 192.168.72.238 8443 v1.31.1 crio true true} ...
	I0920 19:48:50.617941  800156 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-220027 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-220027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:48:50.618003  800156 ssh_runner.go:195] Run: crio config
	I0920 19:48:50.671465  800156 cni.go:84] Creating CNI manager for ""
	I0920 19:48:50.671487  800156 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:48:50.671498  800156 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:48:50.671521  800156 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.238 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-220027 NodeName:kubernetes-upgrade-220027 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:48:50.671674  800156 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-220027"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 19:48:50.671738  800156 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:48:50.682693  800156 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:48:50.682757  800156 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:48:50.693327  800156 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0920 19:48:50.710529  800156 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:48:50.730966  800156 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0920 19:48:50.751574  800156 ssh_runner.go:195] Run: grep 192.168.72.238	control-plane.minikube.internal$ /etc/hosts
	I0920 19:48:50.757188  800156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:48:50.890325  800156 ssh_runner.go:195] Run: sudo systemctl start kubelet
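
At this point the kubelet unit files and /var/tmp/minikube/kubeadm.yaml.new have been written and kubelet restarted. When debugging a generated config like the one printed above by hand, kubeadm's dry-run mode is a cheap sanity check; a sketch, assuming it is run on the node itself:

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// Validates the config and prints what kubeadm would do without
		// touching the node. The path matches the kubeadm.yaml.new file that
		// the log scp's onto the machine; adjust as needed.
		cmd := exec.Command("sudo", "kubeadm", "init",
			"--config", "/var/tmp/minikube/kubeadm.yaml.new", "--dry-run")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}
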
	I0920 19:48:50.906441  800156 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027 for IP: 192.168.72.238
	I0920 19:48:50.906469  800156 certs.go:194] generating shared ca certs ...
	I0920 19:48:50.906489  800156 certs.go:226] acquiring lock for ca certs: {Name:mkf559981e1ff96dd3b092845a7637f34a653668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:48:50.906689  800156 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key
	I0920 19:48:50.906728  800156 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key
	I0920 19:48:50.906739  800156 certs.go:256] generating profile certs ...
	I0920 19:48:50.906825  800156 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/client.key
	I0920 19:48:50.906913  800156 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/apiserver.key.ee0a722d
	I0920 19:48:50.906967  800156 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/proxy-client.key
	I0920 19:48:50.907117  800156 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem (1338 bytes)
	W0920 19:48:50.907151  800156 certs.go:480] ignoring /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497_empty.pem, impossibly tiny 0 bytes
	I0920 19:48:50.907161  800156 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:48:50.907183  800156 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:48:50.907205  800156 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:48:50.907268  800156 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem (1679 bytes)
	I0920 19:48:50.907319  800156 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 19:48:50.907930  800156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:48:50.935249  800156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:48:50.959337  800156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:48:50.982921  800156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:48:51.007331  800156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0920 19:48:51.034355  800156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 19:48:51.060703  800156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:48:51.092355  800156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kubernetes-upgrade-220027/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 19:48:51.122499  800156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /usr/share/ca-certificates/7484972.pem (1708 bytes)
	I0920 19:48:51.147317  800156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:48:51.173429  800156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem --> /usr/share/ca-certificates/748497.pem (1338 bytes)
	I0920 19:48:51.204373  800156 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:48:51.222919  800156 ssh_runner.go:195] Run: openssl version
	I0920 19:48:51.230525  800156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:48:51.241962  800156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:48:51.246795  800156 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:48:51.246898  800156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:48:51.253839  800156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:48:51.264207  800156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/748497.pem && ln -fs /usr/share/ca-certificates/748497.pem /etc/ssl/certs/748497.pem"
	I0920 19:48:51.276761  800156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/748497.pem
	I0920 19:48:51.281911  800156 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 18:33 /usr/share/ca-certificates/748497.pem
	I0920 19:48:51.282016  800156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/748497.pem
	I0920 19:48:51.288461  800156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/748497.pem /etc/ssl/certs/51391683.0"
	I0920 19:48:51.298503  800156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7484972.pem && ln -fs /usr/share/ca-certificates/7484972.pem /etc/ssl/certs/7484972.pem"
	I0920 19:48:51.310046  800156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7484972.pem
	I0920 19:48:51.314771  800156 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 18:33 /usr/share/ca-certificates/7484972.pem
	I0920 19:48:51.314824  800156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7484972.pem
	I0920 19:48:51.321710  800156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7484972.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:48:51.334080  800156 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:48:51.338783  800156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:48:51.344995  800156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:48:51.351929  800156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:48:51.359510  800156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:48:51.365466  800156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:48:51.372883  800156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
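
certs.go links each CA into /etc/ssl/certs/<openssl hash>.0 and then runs openssl x509 -checkend 86400 against the existing cluster certificates to confirm none expire within a day. A compact sketch of both checks, with paths taken from the log and local execution assumed:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hashLink computes the subject hash openssl uses for CA lookup and returns
	// the /etc/ssl/certs/<hash>.0 symlink target, as in the ln -fs calls above.
	func hashLink(pem string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return "", err
		}
		return "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0", nil
	}

	// expiresWithinDay mirrors the -checkend 86400 runs: a non-zero exit means
	// the certificate expires within the next 86400 seconds.
	func expiresWithinDay(crt string) bool {
		err := exec.Command("openssl", "x509", "-noout", "-in", crt, "-checkend", "86400").Run()
		return err != nil
	}

	func main() {
		link, err := hashLink("/usr/share/ca-certificates/minikubeCA.pem")
		if err != nil {
			panic(err)
		}
		fmt.Println("would link:", link)
		for _, crt := range []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		} {
			fmt.Printf("%s expires within 24h: %v\n", crt, expiresWithinDay(crt))
		}
	}
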
	I0920 19:48:51.379881  800156 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-220027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.1 ClusterName:kubernetes-upgrade-220027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:48:51.379984  800156 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:48:51.380035  800156 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:48:51.488159  800156 cri.go:89] found id: "5f8a31aa920b3352f7e597052843374890d3c3fac8c71549cf8b716bc1df76f6"
	I0920 19:48:51.488186  800156 cri.go:89] found id: "5f3ddba10c015e57d38029972ba39a185984356ce6aa4a2b241c002f5cbd7e12"
	I0920 19:48:51.488190  800156 cri.go:89] found id: "8b2956f056656d9f2a46341052ba0d0590cab073d235551962ae0ef42ddca4c3"
	I0920 19:48:51.488193  800156 cri.go:89] found id: "84b4925f63a796b9e51e610327646486ded37c64aa3e7e9c2aae65261b4aa1b4"
	I0920 19:48:51.488196  800156 cri.go:89] found id: "2be8abbf2f8d07032c53a8ec40011ab35b2334381232564b71f0a843ec5158f1"
	I0920 19:48:51.488199  800156 cri.go:89] found id: "245de0ca109b2774739af91db2399ed4350da598f657793a70b55c4b65765de4"
	I0920 19:48:51.488202  800156 cri.go:89] found id: "017fc33c07d2eb54bb38dbf237d75b16fbaaf8d1df109a50709f331da5d764a9"
	I0920 19:48:51.488204  800156 cri.go:89] found id: "c19f97f1ace0d973ec0a4c34816cd75bfd0e8b41a0b2434f34d5bd86e09f51ba"
	I0920 19:48:51.488207  800156 cri.go:89] found id: "802dd64ade9a962d0f046809c9fae734b6bb6dba843f7ccae56a73c0801149f4"
	I0920 19:48:51.488212  800156 cri.go:89] found id: "6a877e3848a5820336bf4c679c5da53ee29d0f345b085ef4bdf7ca3f2a8acefa"
	I0920 19:48:51.488214  800156 cri.go:89] found id: ""
	I0920 19:48:51.488265  800156 ssh_runner.go:195] Run: sudo runc list -f json
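
StartCluster begins by enumerating the existing kube-system containers through crictl with a pod-namespace label filter, yielding the IDs listed above. The same listing can be reproduced directly; a sketch wrapping the exact command from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same invocation as the cri.go step above: container IDs only,
		// including exited ones, restricted to pods in kube-system.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			panic(err)
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}
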
	
	
	==> CRI-O <==
	Sep 20 19:49:00 kubernetes-upgrade-220027 crio[2094]: time="2024-09-20 19:49:00.657583824Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726861740657560778,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87411528-d422-4f24-91df-a04090167b53 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:49:00 kubernetes-upgrade-220027 crio[2094]: time="2024-09-20 19:49:00.658142773Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=163ee8c5-fe45-42c0-b39f-f795a3881a62 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:49:00 kubernetes-upgrade-220027 crio[2094]: time="2024-09-20 19:49:00.658192231Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=163ee8c5-fe45-42c0-b39f-f795a3881a62 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:49:00 kubernetes-upgrade-220027 crio[2094]: time="2024-09-20 19:49:00.658500410Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a9ba953ebda88c56df3e15b695e8c2f67404a632991909df19f6fdac70a2f029,PodSandboxId:32c339565590090caf68523156e3341a6aa57b7833dacdb802e5217b51eaa0ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726861738020967183,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28f47320-fdc7-4685-960e-750477c59154,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20bdae76a78841f0e1c76e00e3ffcc2fac9de01e756b56643f46476f167d4ada,PodSandboxId:2ebccabc883ed30327c250aebe33bf965db2cf7a63d3c5e19534d39019a9aad4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726861738155978773,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mdzpf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b3ad726-e630-407e-bee5-d422ff8a4ad1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e549838fa3be7cae61b692415063da6f1e9bed14382a73366210fd24cabb8d6,PodSandboxId:dd6f991c22d40164a8dd4cdffc89f50c371ba9d9e7e30bb9d66809dfecd77acc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726861738110461611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4bqvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 18a0798b-7ceb-4ec9-9b15-e9e8171bc4ae,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8f359c235927f1f17995275fc7f7cb3d6ea32841705ce865e9cf69656877ac,PodSandboxId:d8ed6fa7f38eda54b90d751fa6cd58c0a284355b57542681fc1eec8203f34e4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,C
reatedAt:1726861737570758126,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5cg5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3e66d70-2255-4e0d-b8c2-9d843b06ebf1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6d992295b5ea0b59d7c0d70643fc2b23c5d7cab237d0b1420990335616d5eb2,PodSandboxId:7ee098703ffe49b9f78ae5dc149d22f40f09f863deaf4bc6a5260df43d9b2ee7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726861733732335671,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0b551e84eee2e388124c1768210d1d1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:996b8852b7eed0d44744440ab2da6e724ca4f40b5f6264bcf1943f73b3386af5,PodSandboxId:7358f781bad701e61fc4c294906cf46e4493ae88930378eb40ad495b89fe86c0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726861733722054446,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26ef7317bc0c50410f2b0571a7032713,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd131240137d50541f7886e2dc71e4445083fd46fe000da743d9b2f360d1a05d,PodSandboxId:91febcb2a0643691cd679e46611020ddc09d0c2abc9567baba9c02ec89478264,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726861733709163552,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e71afed3ddd0cc3ec690ab9d7c29b77,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9177a1b92872b0b550acfc8c318b3221173b3ab7e9cfd3b9718050d85ee8d872,PodSandboxId:9c177d7c100cd6112ddcebd50bf7ddb367b981af25ddbf7815dbd646e0b7b415,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726861733700336477,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dd9c0a010d2b4ccc5de66c632dbcdbd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f8a31aa920b3352f7e597052843374890d3c3fac8c71549cf8b716bc1df76f6,PodSandboxId:d25668123875a1f0e01af3e14326da7c759c008181b00ea5e74f454c0d68ebb2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726861727745389759,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5cg5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3e66d70-2255-4e0d-b8c2-9d843b06ebf1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f3ddba10c015e57d38029972ba39a185984356ce6aa4a2b241c002f5cbd7e12,PodSandboxId:d8c288ef27919161951bf8e1f1d8d198d03084525a7c50bc15af99c1fef23b22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726861727738624421,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0b551e84eee2e388124c1768210d1d1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b2956f056656d9f2a46341052ba0d0590cab073d235551962ae0ef42ddca4c3,PodSandboxId:a5a4a55a1f3c9a31cfc3362808161498f12da90a90b53b224c29e92039774c8b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726861727649913852,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e71afed3ddd0cc3ec690ab9d7c29b77,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84b4925f63a796b9e51e610327646486ded37c64aa3e7e9c2aae65261b4aa1b4,PodSandboxId:5e781b88e5287387bdf4e38c92b2684f1ceeb8c8143169ca8161c339a97843f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726861727627487547,Labels:map[string]string{io.kubernetes.container.name
: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26ef7317bc0c50410f2b0571a7032713,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2be8abbf2f8d07032c53a8ec40011ab35b2334381232564b71f0a843ec5158f1,PodSandboxId:aab775cbce1f20479f57ca2fbe3259ddce70fcd53580e441103015ee43a0fbba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726861727541828705,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dd9c0a010d2b4ccc5de66c632dbcdbd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=163ee8c5-fe45-42c0-b39f-f795a3881a62 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:49:00 kubernetes-upgrade-220027 crio[2094]: time="2024-09-20 19:49:00.709384451Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6246b8c5-d1fa-4698-9570-e2814f732df1 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:49:00 kubernetes-upgrade-220027 crio[2094]: time="2024-09-20 19:49:00.709457791Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6246b8c5-d1fa-4698-9570-e2814f732df1 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:49:00 kubernetes-upgrade-220027 crio[2094]: time="2024-09-20 19:49:00.710727689Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=544d6c31-9ed4-4767-8e7f-60335e3b969b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:49:00 kubernetes-upgrade-220027 crio[2094]: time="2024-09-20 19:49:00.711298645Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726861740711271686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=544d6c31-9ed4-4767-8e7f-60335e3b969b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:49:00 kubernetes-upgrade-220027 crio[2094]: time="2024-09-20 19:49:00.712191430Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f0ab2bc-6718-4f04-a3d4-ce5067465d4a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:49:00 kubernetes-upgrade-220027 crio[2094]: time="2024-09-20 19:49:00.712288107Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8f0ab2bc-6718-4f04-a3d4-ce5067465d4a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:49:00 kubernetes-upgrade-220027 crio[2094]: time="2024-09-20 19:49:00.712626082Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a9ba953ebda88c56df3e15b695e8c2f67404a632991909df19f6fdac70a2f029,PodSandboxId:32c339565590090caf68523156e3341a6aa57b7833dacdb802e5217b51eaa0ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726861738020967183,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28f47320-fdc7-4685-960e-750477c59154,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20bdae76a78841f0e1c76e00e3ffcc2fac9de01e756b56643f46476f167d4ada,PodSandboxId:2ebccabc883ed30327c250aebe33bf965db2cf7a63d3c5e19534d39019a9aad4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726861738155978773,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mdzpf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b3ad726-e630-407e-bee5-d422ff8a4ad1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e549838fa3be7cae61b692415063da6f1e9bed14382a73366210fd24cabb8d6,PodSandboxId:dd6f991c22d40164a8dd4cdffc89f50c371ba9d9e7e30bb9d66809dfecd77acc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726861738110461611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4bqvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 18a0798b-7ceb-4ec9-9b15-e9e8171bc4ae,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8f359c235927f1f17995275fc7f7cb3d6ea32841705ce865e9cf69656877ac,PodSandboxId:d8ed6fa7f38eda54b90d751fa6cd58c0a284355b57542681fc1eec8203f34e4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,C
reatedAt:1726861737570758126,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5cg5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3e66d70-2255-4e0d-b8c2-9d843b06ebf1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6d992295b5ea0b59d7c0d70643fc2b23c5d7cab237d0b1420990335616d5eb2,PodSandboxId:7ee098703ffe49b9f78ae5dc149d22f40f09f863deaf4bc6a5260df43d9b2ee7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726861733732335671,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0b551e84eee2e388124c1768210d1d1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:996b8852b7eed0d44744440ab2da6e724ca4f40b5f6264bcf1943f73b3386af5,PodSandboxId:7358f781bad701e61fc4c294906cf46e4493ae88930378eb40ad495b89fe86c0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726861733722054446,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26ef7317bc0c50410f2b0571a7032713,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd131240137d50541f7886e2dc71e4445083fd46fe000da743d9b2f360d1a05d,PodSandboxId:91febcb2a0643691cd679e46611020ddc09d0c2abc9567baba9c02ec89478264,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726861733709163552,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e71afed3ddd0cc3ec690ab9d7c29b77,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9177a1b92872b0b550acfc8c318b3221173b3ab7e9cfd3b9718050d85ee8d872,PodSandboxId:9c177d7c100cd6112ddcebd50bf7ddb367b981af25ddbf7815dbd646e0b7b415,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726861733700336477,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dd9c0a010d2b4ccc5de66c632dbcdbd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f8a31aa920b3352f7e597052843374890d3c3fac8c71549cf8b716bc1df76f6,PodSandboxId:d25668123875a1f0e01af3e14326da7c759c008181b00ea5e74f454c0d68ebb2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726861727745389759,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5cg5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3e66d70-2255-4e0d-b8c2-9d843b06ebf1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f3ddba10c015e57d38029972ba39a185984356ce6aa4a2b241c002f5cbd7e12,PodSandboxId:d8c288ef27919161951bf8e1f1d8d198d03084525a7c50bc15af99c1fef23b22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726861727738624421,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0b551e84eee2e388124c1768210d1d1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b2956f056656d9f2a46341052ba0d0590cab073d235551962ae0ef42ddca4c3,PodSandboxId:a5a4a55a1f3c9a31cfc3362808161498f12da90a90b53b224c29e92039774c8b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726861727649913852,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e71afed3ddd0cc3ec690ab9d7c29b77,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84b4925f63a796b9e51e610327646486ded37c64aa3e7e9c2aae65261b4aa1b4,PodSandboxId:5e781b88e5287387bdf4e38c92b2684f1ceeb8c8143169ca8161c339a97843f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726861727627487547,Labels:map[string]string{io.kubernetes.container.name
: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26ef7317bc0c50410f2b0571a7032713,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2be8abbf2f8d07032c53a8ec40011ab35b2334381232564b71f0a843ec5158f1,PodSandboxId:aab775cbce1f20479f57ca2fbe3259ddce70fcd53580e441103015ee43a0fbba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726861727541828705,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dd9c0a010d2b4ccc5de66c632dbcdbd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8f0ab2bc-6718-4f04-a3d4-ce5067465d4a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:49:00 kubernetes-upgrade-220027 crio[2094]: time="2024-09-20 19:49:00.758676911Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eea3bb94-011d-44e8-866e-37e872e07270 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:49:00 kubernetes-upgrade-220027 crio[2094]: time="2024-09-20 19:49:00.758753125Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eea3bb94-011d-44e8-866e-37e872e07270 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:49:00 kubernetes-upgrade-220027 crio[2094]: time="2024-09-20 19:49:00.761214169Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cf371ee8-cf29-448c-b7ca-3149609a6b6f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:49:00 kubernetes-upgrade-220027 crio[2094]: time="2024-09-20 19:49:00.761645843Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726861740761619868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cf371ee8-cf29-448c-b7ca-3149609a6b6f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:49:00 kubernetes-upgrade-220027 crio[2094]: time="2024-09-20 19:49:00.762287428Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=edc06fef-a6df-45a7-a42d-23a2daa291cf name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:49:00 kubernetes-upgrade-220027 crio[2094]: time="2024-09-20 19:49:00.762511972Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=edc06fef-a6df-45a7-a42d-23a2daa291cf name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:49:00 kubernetes-upgrade-220027 crio[2094]: time="2024-09-20 19:49:00.763119728Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a9ba953ebda88c56df3e15b695e8c2f67404a632991909df19f6fdac70a2f029,PodSandboxId:32c339565590090caf68523156e3341a6aa57b7833dacdb802e5217b51eaa0ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726861738020967183,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28f47320-fdc7-4685-960e-750477c59154,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20bdae76a78841f0e1c76e00e3ffcc2fac9de01e756b56643f46476f167d4ada,PodSandboxId:2ebccabc883ed30327c250aebe33bf965db2cf7a63d3c5e19534d39019a9aad4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726861738155978773,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mdzpf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b3ad726-e630-407e-bee5-d422ff8a4ad1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e549838fa3be7cae61b692415063da6f1e9bed14382a73366210fd24cabb8d6,PodSandboxId:dd6f991c22d40164a8dd4cdffc89f50c371ba9d9e7e30bb9d66809dfecd77acc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726861738110461611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4bqvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 18a0798b-7ceb-4ec9-9b15-e9e8171bc4ae,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8f359c235927f1f17995275fc7f7cb3d6ea32841705ce865e9cf69656877ac,PodSandboxId:d8ed6fa7f38eda54b90d751fa6cd58c0a284355b57542681fc1eec8203f34e4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,C
reatedAt:1726861737570758126,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5cg5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3e66d70-2255-4e0d-b8c2-9d843b06ebf1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6d992295b5ea0b59d7c0d70643fc2b23c5d7cab237d0b1420990335616d5eb2,PodSandboxId:7ee098703ffe49b9f78ae5dc149d22f40f09f863deaf4bc6a5260df43d9b2ee7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726861733732335671,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0b551e84eee2e388124c1768210d1d1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:996b8852b7eed0d44744440ab2da6e724ca4f40b5f6264bcf1943f73b3386af5,PodSandboxId:7358f781bad701e61fc4c294906cf46e4493ae88930378eb40ad495b89fe86c0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726861733722054446,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26ef7317bc0c50410f2b0571a7032713,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd131240137d50541f7886e2dc71e4445083fd46fe000da743d9b2f360d1a05d,PodSandboxId:91febcb2a0643691cd679e46611020ddc09d0c2abc9567baba9c02ec89478264,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726861733709163552,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e71afed3ddd0cc3ec690ab9d7c29b77,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9177a1b92872b0b550acfc8c318b3221173b3ab7e9cfd3b9718050d85ee8d872,PodSandboxId:9c177d7c100cd6112ddcebd50bf7ddb367b981af25ddbf7815dbd646e0b7b415,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726861733700336477,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dd9c0a010d2b4ccc5de66c632dbcdbd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f8a31aa920b3352f7e597052843374890d3c3fac8c71549cf8b716bc1df76f6,PodSandboxId:d25668123875a1f0e01af3e14326da7c759c008181b00ea5e74f454c0d68ebb2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726861727745389759,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5cg5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3e66d70-2255-4e0d-b8c2-9d843b06ebf1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f3ddba10c015e57d38029972ba39a185984356ce6aa4a2b241c002f5cbd7e12,PodSandboxId:d8c288ef27919161951bf8e1f1d8d198d03084525a7c50bc15af99c1fef23b22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726861727738624421,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0b551e84eee2e388124c1768210d1d1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b2956f056656d9f2a46341052ba0d0590cab073d235551962ae0ef42ddca4c3,PodSandboxId:a5a4a55a1f3c9a31cfc3362808161498f12da90a90b53b224c29e92039774c8b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726861727649913852,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e71afed3ddd0cc3ec690ab9d7c29b77,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84b4925f63a796b9e51e610327646486ded37c64aa3e7e9c2aae65261b4aa1b4,PodSandboxId:5e781b88e5287387bdf4e38c92b2684f1ceeb8c8143169ca8161c339a97843f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726861727627487547,Labels:map[string]string{io.kubernetes.container.name
: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26ef7317bc0c50410f2b0571a7032713,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2be8abbf2f8d07032c53a8ec40011ab35b2334381232564b71f0a843ec5158f1,PodSandboxId:aab775cbce1f20479f57ca2fbe3259ddce70fcd53580e441103015ee43a0fbba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726861727541828705,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dd9c0a010d2b4ccc5de66c632dbcdbd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=edc06fef-a6df-45a7-a42d-23a2daa291cf name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:49:00 kubernetes-upgrade-220027 crio[2094]: time="2024-09-20 19:49:00.808532545Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d667cb26-9aa4-4915-8042-8dba0c11c710 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:49:00 kubernetes-upgrade-220027 crio[2094]: time="2024-09-20 19:49:00.808607848Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d667cb26-9aa4-4915-8042-8dba0c11c710 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:49:00 kubernetes-upgrade-220027 crio[2094]: time="2024-09-20 19:49:00.809932057Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9dedee08-d0f4-4126-9297-bd2e3957c3c6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:49:00 kubernetes-upgrade-220027 crio[2094]: time="2024-09-20 19:49:00.810417518Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726861740810394861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9dedee08-d0f4-4126-9297-bd2e3957c3c6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:49:00 kubernetes-upgrade-220027 crio[2094]: time="2024-09-20 19:49:00.810875359Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d7301645-2921-4297-b326-f2cfda62ca28 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:49:00 kubernetes-upgrade-220027 crio[2094]: time="2024-09-20 19:49:00.810929024Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d7301645-2921-4297-b326-f2cfda62ca28 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:49:00 kubernetes-upgrade-220027 crio[2094]: time="2024-09-20 19:49:00.811292272Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a9ba953ebda88c56df3e15b695e8c2f67404a632991909df19f6fdac70a2f029,PodSandboxId:32c339565590090caf68523156e3341a6aa57b7833dacdb802e5217b51eaa0ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726861738020967183,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28f47320-fdc7-4685-960e-750477c59154,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20bdae76a78841f0e1c76e00e3ffcc2fac9de01e756b56643f46476f167d4ada,PodSandboxId:2ebccabc883ed30327c250aebe33bf965db2cf7a63d3c5e19534d39019a9aad4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726861738155978773,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mdzpf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b3ad726-e630-407e-bee5-d422ff8a4ad1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e549838fa3be7cae61b692415063da6f1e9bed14382a73366210fd24cabb8d6,PodSandboxId:dd6f991c22d40164a8dd4cdffc89f50c371ba9d9e7e30bb9d66809dfecd77acc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726861738110461611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4bqvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 18a0798b-7ceb-4ec9-9b15-e9e8171bc4ae,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8f359c235927f1f17995275fc7f7cb3d6ea32841705ce865e9cf69656877ac,PodSandboxId:d8ed6fa7f38eda54b90d751fa6cd58c0a284355b57542681fc1eec8203f34e4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,C
reatedAt:1726861737570758126,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5cg5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3e66d70-2255-4e0d-b8c2-9d843b06ebf1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6d992295b5ea0b59d7c0d70643fc2b23c5d7cab237d0b1420990335616d5eb2,PodSandboxId:7ee098703ffe49b9f78ae5dc149d22f40f09f863deaf4bc6a5260df43d9b2ee7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726861733732335671,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0b551e84eee2e388124c1768210d1d1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:996b8852b7eed0d44744440ab2da6e724ca4f40b5f6264bcf1943f73b3386af5,PodSandboxId:7358f781bad701e61fc4c294906cf46e4493ae88930378eb40ad495b89fe86c0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726861733722054446,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26ef7317bc0c50410f2b0571a7032713,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd131240137d50541f7886e2dc71e4445083fd46fe000da743d9b2f360d1a05d,PodSandboxId:91febcb2a0643691cd679e46611020ddc09d0c2abc9567baba9c02ec89478264,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726861733709163552,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e71afed3ddd0cc3ec690ab9d7c29b77,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9177a1b92872b0b550acfc8c318b3221173b3ab7e9cfd3b9718050d85ee8d872,PodSandboxId:9c177d7c100cd6112ddcebd50bf7ddb367b981af25ddbf7815dbd646e0b7b415,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726861733700336477,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dd9c0a010d2b4ccc5de66c632dbcdbd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f8a31aa920b3352f7e597052843374890d3c3fac8c71549cf8b716bc1df76f6,PodSandboxId:d25668123875a1f0e01af3e14326da7c759c008181b00ea5e74f454c0d68ebb2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726861727745389759,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5cg5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3e66d70-2255-4e0d-b8c2-9d843b06ebf1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f3ddba10c015e57d38029972ba39a185984356ce6aa4a2b241c002f5cbd7e12,PodSandboxId:d8c288ef27919161951bf8e1f1d8d198d03084525a7c50bc15af99c1fef23b22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726861727738624421,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0b551e84eee2e388124c1768210d1d1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b2956f056656d9f2a46341052ba0d0590cab073d235551962ae0ef42ddca4c3,PodSandboxId:a5a4a55a1f3c9a31cfc3362808161498f12da90a90b53b224c29e92039774c8b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726861727649913852,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e71afed3ddd0cc3ec690ab9d7c29b77,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84b4925f63a796b9e51e610327646486ded37c64aa3e7e9c2aae65261b4aa1b4,PodSandboxId:5e781b88e5287387bdf4e38c92b2684f1ceeb8c8143169ca8161c339a97843f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726861727627487547,Labels:map[string]string{io.kubernetes.container.name
: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26ef7317bc0c50410f2b0571a7032713,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2be8abbf2f8d07032c53a8ec40011ab35b2334381232564b71f0a843ec5158f1,PodSandboxId:aab775cbce1f20479f57ca2fbe3259ddce70fcd53580e441103015ee43a0fbba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726861727541828705,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-kubernetes-upgrade-220027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dd9c0a010d2b4ccc5de66c632dbcdbd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d7301645-2921-4297-b326-f2cfda62ca28 name=/runtime.v1.RuntimeService/ListContainers
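For reference, the crio debug entries above are the routine CRI polling round trip (Version, ImageFsInfo, ListContainers) that backs the container status table below; none of them indicates an error. If the VM is still running, the same CRI endpoints can be queried directly. A minimal sketch, assuming the minikube profile name matches the node name and crictl is available in the guest:

	# query the same CRI endpoints that crio is answering in the log above
	out/minikube-linux-amd64 -p kubernetes-upgrade-220027 ssh -- sudo crictl version
	out/minikube-linux-amd64 -p kubernetes-upgrade-220027 ssh -- sudo crictl imagefsinfo
	out/minikube-linux-amd64 -p kubernetes-upgrade-220027 ssh -- sudo crictl ps -a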
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	20bdae76a7884       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   2 seconds ago       Running             coredns                   0                   2ebccabc883ed       coredns-7c65d6cfc9-mdzpf
	9e549838fa3be       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   2 seconds ago       Running             coredns                   0                   dd6f991c22d40       coredns-7c65d6cfc9-4bqvz
	a9ba953ebda88       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   2 seconds ago       Running             storage-provisioner       0                   32c3395655900       storage-provisioner
	0e8f359c23592       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   3 seconds ago       Running             kube-proxy                2                   d8ed6fa7f38ed       kube-proxy-5cg5m
	f6d992295b5ea       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   7 seconds ago       Running             kube-scheduler            2                   7ee098703ffe4       kube-scheduler-kubernetes-upgrade-220027
	996b8852b7eed       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   7 seconds ago       Running             etcd                      2                   7358f781bad70       etcd-kubernetes-upgrade-220027
	dd131240137d5       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   7 seconds ago       Running             kube-controller-manager   2                   91febcb2a0643       kube-controller-manager-kubernetes-upgrade-220027
	9177a1b92872b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   7 seconds ago       Running             kube-apiserver            2                   9c177d7c100cd       kube-apiserver-kubernetes-upgrade-220027
	5f8a31aa920b3       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   13 seconds ago      Exited              kube-proxy                1                   d25668123875a       kube-proxy-5cg5m
	5f3ddba10c015       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   13 seconds ago      Exited              kube-scheduler            1                   d8c288ef27919       kube-scheduler-kubernetes-upgrade-220027
	8b2956f056656       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   13 seconds ago      Exited              kube-controller-manager   1                   a5a4a55a1f3c9       kube-controller-manager-kubernetes-upgrade-220027
	84b4925f63a79       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   13 seconds ago      Exited              etcd                      1                   5e781b88e5287       etcd-kubernetes-upgrade-220027
	2be8abbf2f8d0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   13 seconds ago      Exited              kube-apiserver            1                   aab775cbce1f2       kube-apiserver-kubernetes-upgrade-220027
	
	
	==> coredns [20bdae76a78841f0e1c76e00e3ffcc2fac9de01e756b56643f46476f167d4ada] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	
	
	==> coredns [9e549838fa3be7cae61b692415063da6f1e9bed14382a73366210fd24cabb8d6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
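The repeated "waiting for Kubernetes API" lines from both coredns pods come from the ready and kubernetes plugins blocking until the apiserver responds again, which is consistent with the control-plane restart visible in the container list above. A minimal sketch for pulling the same pod logs live, assuming the kubeconfig context matches the profile name:

	# pod names taken from the container status table above
	kubectl --context kubernetes-upgrade-220027 -n kube-system logs coredns-7c65d6cfc9-mdzpf
	kubectl --context kubernetes-upgrade-220027 -n kube-system logs coredns-7c65d6cfc9-4bqvz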
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-220027
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-220027
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 19:48:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-220027
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:48:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 19:48:56 +0000   Fri, 20 Sep 2024 19:48:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 19:48:56 +0000   Fri, 20 Sep 2024 19:48:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 19:48:56 +0000   Fri, 20 Sep 2024 19:48:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 19:48:56 +0000   Fri, 20 Sep 2024 19:48:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.238
	  Hostname:    kubernetes-upgrade-220027
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b937ab146b9e42eca9ee152dd825ed64
	  System UUID:                b937ab14-6b9e-42ec-a9ee-152dd825ed64
	  Boot ID:                    c6d0b19d-2d21-4798-9a3b-d797655c7fcb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-4bqvz                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16s
	  kube-system                 coredns-7c65d6cfc9-mdzpf                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16s
	  kube-system                 etcd-kubernetes-upgrade-220027                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19s
	  kube-system                 kube-apiserver-kubernetes-upgrade-220027             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-220027    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16s
	  kube-system                 kube-proxy-5cg5m                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	  kube-system                 kube-scheduler-kubernetes-upgrade-220027             100m (5%)     0 (0%)      0 (0%)           0 (0%)         1s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2s                 kube-proxy       
	  Normal  Starting                 27s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  26s (x8 over 27s)  kubelet          Node kubernetes-upgrade-220027 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 27s)  kubelet          Node kubernetes-upgrade-220027 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 27s)  kubelet          Node kubernetes-upgrade-220027 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16s                node-controller  Node kubernetes-upgrade-220027 event: Registered Node kubernetes-upgrade-220027 in Controller
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-220027 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-220027 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet          Node kubernetes-upgrade-220027 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-220027 event: Registered Node kubernetes-upgrade-220027 in Controller
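The node description above shows the node Ready with no taints, and the event list reflects both control-plane restarts (two "Starting kubelet" / "RegisteredNode" pairs). A minimal sketch for regenerating this view while the cluster is still up, assuming the kubeconfig context matches the profile name:

	kubectl --context kubernetes-upgrade-220027 describe node kubernetes-upgrade-220027
	kubectl --context kubernetes-upgrade-220027 get node kubernetes-upgrade-220027 -o wide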
	
	
	==> dmesg <==
	[  +2.737187] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.506535] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.160872] systemd-fstab-generator[557]: Ignoring "noauto" option for root device
	[  +0.058567] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054956] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.165202] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.156904] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.300735] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +4.485435] systemd-fstab-generator[712]: Ignoring "noauto" option for root device
	[  +0.063237] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.431338] systemd-fstab-generator[834]: Ignoring "noauto" option for root device
	[  +9.409756] systemd-fstab-generator[1219]: Ignoring "noauto" option for root device
	[  +0.115887] kauditd_printk_skb: 97 callbacks suppressed
	[  +4.309775] systemd-fstab-generator[2000]: Ignoring "noauto" option for root device
	[  +0.234246] systemd-fstab-generator[2016]: Ignoring "noauto" option for root device
	[  +0.239196] systemd-fstab-generator[2031]: Ignoring "noauto" option for root device
	[  +0.163832] systemd-fstab-generator[2043]: Ignoring "noauto" option for root device
	[  +0.142736] kauditd_printk_skb: 164 callbacks suppressed
	[  +0.292429] systemd-fstab-generator[2084]: Ignoring "noauto" option for root device
	[  +1.161041] systemd-fstab-generator[2293]: Ignoring "noauto" option for root device
	[  +2.170353] systemd-fstab-generator[2594]: Ignoring "noauto" option for root device
	[  +4.628318] kauditd_printk_skb: 154 callbacks suppressed
	[  +1.329760] systemd-fstab-generator[3333]: Ignoring "noauto" option for root device
	
	
	==> etcd [84b4925f63a796b9e51e610327646486ded37c64aa3e7e9c2aae65261b4aa1b4] <==
	{"level":"info","ts":"2024-09-20T19:48:49.517890Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e2f0763a23b2a427 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-20T19:48:49.517933Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e2f0763a23b2a427 received MsgPreVoteResp from e2f0763a23b2a427 at term 2"}
	{"level":"info","ts":"2024-09-20T19:48:49.517949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e2f0763a23b2a427 became candidate at term 3"}
	{"level":"info","ts":"2024-09-20T19:48:49.517958Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e2f0763a23b2a427 received MsgVoteResp from e2f0763a23b2a427 at term 3"}
	{"level":"info","ts":"2024-09-20T19:48:49.517970Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e2f0763a23b2a427 became leader at term 3"}
	{"level":"info","ts":"2024-09-20T19:48:49.517980Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e2f0763a23b2a427 elected leader e2f0763a23b2a427 at term 3"}
	{"level":"info","ts":"2024-09-20T19:48:49.523349Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e2f0763a23b2a427","local-member-attributes":"{Name:kubernetes-upgrade-220027 ClientURLs:[https://192.168.72.238:2379]}","request-path":"/0/members/e2f0763a23b2a427/attributes","cluster-id":"fce591e0af426ce5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T19:48:49.524093Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T19:48:49.524707Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T19:48:49.526640Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T19:48:49.531454Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.238:2379"}
	{"level":"info","ts":"2024-09-20T19:48:49.533859Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T19:48:49.536189Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T19:48:49.538113Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T19:48:49.539388Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T19:48:49.789976Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-20T19:48:49.792397Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-220027","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.238:2380"],"advertise-client-urls":["https://192.168.72.238:2379"]}
	{"level":"warn","ts":"2024-09-20T19:48:49.792624Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.72.238:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T19:48:49.792728Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.72.238:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T19:48:49.792925Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T19:48:49.793201Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-20T19:48:49.799195Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e2f0763a23b2a427","current-leader-member-id":"e2f0763a23b2a427"}
	{"level":"info","ts":"2024-09-20T19:48:50.039448Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.72.238:2380"}
	{"level":"info","ts":"2024-09-20T19:48:50.039827Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.72.238:2380"}
	{"level":"info","ts":"2024-09-20T19:48:50.040129Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-220027","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.238:2380"],"advertise-client-urls":["https://192.168.72.238:2379"]}
	
	
	==> etcd [996b8852b7eed0d44744440ab2da6e724ca4f40b5f6264bcf1943f73b3386af5] <==
	{"level":"info","ts":"2024-09-20T19:48:54.201754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e2f0763a23b2a427 switched to configuration voters=(16352700239061361703)"}
	{"level":"info","ts":"2024-09-20T19:48:54.204141Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fce591e0af426ce5","local-member-id":"e2f0763a23b2a427","added-peer-id":"e2f0763a23b2a427","added-peer-peer-urls":["https://192.168.72.238:2380"]}
	{"level":"info","ts":"2024-09-20T19:48:54.204328Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fce591e0af426ce5","local-member-id":"e2f0763a23b2a427","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:48:54.204391Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:48:54.216674Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-20T19:48:54.218237Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.238:2380"}
	{"level":"info","ts":"2024-09-20T19:48:54.218317Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.238:2380"}
	{"level":"info","ts":"2024-09-20T19:48:54.230178Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"e2f0763a23b2a427","initial-advertise-peer-urls":["https://192.168.72.238:2380"],"listen-peer-urls":["https://192.168.72.238:2380"],"advertise-client-urls":["https://192.168.72.238:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.238:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-20T19:48:54.230274Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-20T19:48:55.320382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e2f0763a23b2a427 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-20T19:48:55.320441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e2f0763a23b2a427 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-20T19:48:55.320460Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e2f0763a23b2a427 received MsgPreVoteResp from e2f0763a23b2a427 at term 3"}
	{"level":"info","ts":"2024-09-20T19:48:55.320475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e2f0763a23b2a427 became candidate at term 4"}
	{"level":"info","ts":"2024-09-20T19:48:55.320484Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e2f0763a23b2a427 received MsgVoteResp from e2f0763a23b2a427 at term 4"}
	{"level":"info","ts":"2024-09-20T19:48:55.320495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e2f0763a23b2a427 became leader at term 4"}
	{"level":"info","ts":"2024-09-20T19:48:55.320502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e2f0763a23b2a427 elected leader e2f0763a23b2a427 at term 4"}
	{"level":"info","ts":"2024-09-20T19:48:55.326230Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e2f0763a23b2a427","local-member-attributes":"{Name:kubernetes-upgrade-220027 ClientURLs:[https://192.168.72.238:2379]}","request-path":"/0/members/e2f0763a23b2a427/attributes","cluster-id":"fce591e0af426ce5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T19:48:55.326292Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T19:48:55.326638Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T19:48:55.326698Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T19:48:55.326807Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T19:48:55.327504Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T19:48:55.327952Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T19:48:55.328596Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T19:48:55.328855Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.238:2379"}
	
	
	==> kernel <==
	 19:49:01 up 0 min,  0 users,  load average: 1.02, 0.27, 0.09
	Linux kubernetes-upgrade-220027 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2be8abbf2f8d07032c53a8ec40011ab35b2334381232564b71f0a843ec5158f1] <==
	I0920 19:48:48.035964       1 options.go:228] external host was not specified, using 192.168.72.238
	I0920 19:48:48.042269       1 server.go:142] Version: v1.31.1
	I0920 19:48:48.042343       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 19:48:48.749115       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0920 19:48:48.775259       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 19:48:48.785647       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0920 19:48:48.788145       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0920 19:48:48.788451       1 instance.go:232] Using reconciler: lease
	I0920 19:48:49.764841       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	W0920 19:48:49.764887       1 genericapiserver.go:765] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	W0920 19:48:49.793105       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:52916->127.0.0.1:2379: read: connection reset by peer"
	E0920 19:48:49.793614       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0920 19:48:49.793756       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:48:49.796426       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:48:49.796484       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:48:49.796522       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:48:49.796548       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [9177a1b92872b0b550acfc8c318b3221173b3ab7e9cfd3b9718050d85ee8d872] <==
	I0920 19:48:56.781976       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0920 19:48:56.803597       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0920 19:48:56.807657       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0920 19:48:56.808927       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0920 19:48:56.809132       1 shared_informer.go:320] Caches are synced for configmaps
	I0920 19:48:56.809310       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0920 19:48:56.809350       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 19:48:56.814313       1 aggregator.go:171] initial CRD sync complete...
	I0920 19:48:56.814628       1 autoregister_controller.go:144] Starting autoregister controller
	I0920 19:48:56.814675       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0920 19:48:56.814707       1 cache.go:39] Caches are synced for autoregister controller
	I0920 19:48:56.817441       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0920 19:48:56.834597       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0920 19:48:56.845695       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0920 19:48:56.846431       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 19:48:56.846584       1 policy_source.go:224] refreshing policies
	I0920 19:48:56.898538       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0920 19:48:57.697563       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0920 19:48:58.638515       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0920 19:48:58.661810       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 19:48:58.714799       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 19:48:58.810227       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0920 19:48:58.816674       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0920 19:48:59.682385       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 19:49:00.532828       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8b2956f056656d9f2a46341052ba0d0590cab073d235551962ae0ef42ddca4c3] <==
	I0920 19:48:49.466865       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-controller-manager [dd131240137d50541f7886e2dc71e4445083fd46fe000da743d9b2f360d1a05d] <==
	I0920 19:49:00.187909       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0920 19:49:00.190222       1 shared_informer.go:320] Caches are synced for crt configmap
	I0920 19:49:00.215480       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0920 19:49:00.221916       1 shared_informer.go:320] Caches are synced for TTL
	I0920 19:49:00.227422       1 shared_informer.go:320] Caches are synced for daemon sets
	I0920 19:49:00.227455       1 shared_informer.go:320] Caches are synced for stateful set
	I0920 19:49:00.227581       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0920 19:49:00.229268       1 shared_informer.go:320] Caches are synced for job
	I0920 19:49:00.234975       1 shared_informer.go:320] Caches are synced for service account
	I0920 19:49:00.244446       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0920 19:49:00.259206       1 shared_informer.go:320] Caches are synced for attach detach
	I0920 19:49:00.264554       1 shared_informer.go:320] Caches are synced for PV protection
	I0920 19:49:00.327104       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0920 19:49:00.327343       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="65.231µs"
	I0920 19:49:00.327191       1 shared_informer.go:320] Caches are synced for persistent volume
	I0920 19:49:00.367298       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0920 19:49:00.367578       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-220027"
	I0920 19:49:00.378201       1 shared_informer.go:320] Caches are synced for disruption
	I0920 19:49:00.378756       1 shared_informer.go:320] Caches are synced for deployment
	I0920 19:49:00.404528       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 19:49:00.410980       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 19:49:00.411044       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0920 19:49:00.812177       1 shared_informer.go:320] Caches are synced for garbage collector
	I0920 19:49:00.812214       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0920 19:49:00.850460       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [0e8f359c235927f1f17995275fc7f7cb3d6ea32841705ce865e9cf69656877ac] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 19:48:58.651392       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 19:48:58.695156       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.238"]
	E0920 19:48:58.697210       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 19:48:58.755238       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 19:48:58.755287       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 19:48:58.755315       1 server_linux.go:169] "Using iptables Proxier"
	I0920 19:48:58.759958       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 19:48:58.760851       1 server.go:483] "Version info" version="v1.31.1"
	I0920 19:48:58.760883       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 19:48:58.765941       1 config.go:199] "Starting service config controller"
	I0920 19:48:58.767637       1 config.go:105] "Starting endpoint slice config controller"
	I0920 19:48:58.767761       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 19:48:58.768191       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 19:48:58.768722       1 config.go:328] "Starting node config controller"
	I0920 19:48:58.768752       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 19:48:58.869098       1 shared_informer.go:320] Caches are synced for node config
	I0920 19:48:58.869157       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 19:48:58.874175       1 shared_informer.go:320] Caches are synced for service config
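
The kube-proxy log above shows its nftables cleanup probe failing with "Operation not supported" before it falls back to the iptables proxier (the earlier kube-proxy instance below hits the same error). As an illustrative sketch only (not kube-proxy's actual source, and assuming the nft binary is installed and run with root privileges), the same probe can be reproduced from Go by piping the one-line ruleset shown in the log into `nft -f /dev/stdin`:

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// The same one-line ruleset the log shows being rejected.
		rules := "add table ip kube-proxy\n"
	
		cmd := exec.Command("nft", "-f", "/dev/stdin")
		cmd.Stdin = strings.NewReader(rules)
		if out, err := cmd.CombinedOutput(); err != nil {
			// On a kernel without nf_tables support this mirrors the
			// "Operation not supported" error captured above.
			fmt.Printf("nftables unavailable: %v\n%s", err, out)
			return
		}
		// Clean up the test table if the add succeeded.
		exec.Command("nft", "delete", "table", "ip", "kube-proxy").Run()
		fmt.Println("nftables is supported on this kernel")
	}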
	
	
	==> kube-proxy [5f8a31aa920b3352f7e597052843374890d3c3fac8c71549cf8b716bc1df76f6] <==
	I0920 19:48:49.178801       1 server_linux.go:66] "Using iptables proxy"
	E0920 19:48:49.529277       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	
	
	==> kube-scheduler [5f3ddba10c015e57d38029972ba39a185984356ce6aa4a2b241c002f5cbd7e12] <==
	I0920 19:48:49.709410       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [f6d992295b5ea0b59d7c0d70643fc2b23c5d7cab237d0b1420990335616d5eb2] <==
	I0920 19:48:54.749494       1 serving.go:386] Generated self-signed cert in-memory
	W0920 19:48:56.733521       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 19:48:56.733607       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 19:48:56.733975       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 19:48:56.734078       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 19:48:56.805637       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 19:48:56.805680       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 19:48:56.819545       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 19:48:56.819710       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 19:48:56.819737       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 19:48:56.819810       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 19:48:56.920916       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 19:48:53 kubernetes-upgrade-220027 kubelet[2601]: I0920 19:48:53.450254    2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e71afed3ddd0cc3ec690ab9d7c29b77-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-220027\" (UID: \"6e71afed3ddd0cc3ec690ab9d7c29b77\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-220027"
	Sep 20 19:48:53 kubernetes-upgrade-220027 kubelet[2601]: I0920 19:48:53.450281    2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4dd9c0a010d2b4ccc5de66c632dbcdbd-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-220027\" (UID: \"4dd9c0a010d2b4ccc5de66c632dbcdbd\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-220027"
	Sep 20 19:48:53 kubernetes-upgrade-220027 kubelet[2601]: I0920 19:48:53.571976    2601 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-220027"
	Sep 20 19:48:53 kubernetes-upgrade-220027 kubelet[2601]: E0920 19:48:53.572917    2601 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.238:8443: connect: connection refused" node="kubernetes-upgrade-220027"
	Sep 20 19:48:53 kubernetes-upgrade-220027 kubelet[2601]: I0920 19:48:53.681531    2601 scope.go:117] "RemoveContainer" containerID="84b4925f63a796b9e51e610327646486ded37c64aa3e7e9c2aae65261b4aa1b4"
	Sep 20 19:48:53 kubernetes-upgrade-220027 kubelet[2601]: I0920 19:48:53.682117    2601 scope.go:117] "RemoveContainer" containerID="2be8abbf2f8d07032c53a8ec40011ab35b2334381232564b71f0a843ec5158f1"
	Sep 20 19:48:53 kubernetes-upgrade-220027 kubelet[2601]: I0920 19:48:53.683069    2601 scope.go:117] "RemoveContainer" containerID="8b2956f056656d9f2a46341052ba0d0590cab073d235551962ae0ef42ddca4c3"
	Sep 20 19:48:53 kubernetes-upgrade-220027 kubelet[2601]: I0920 19:48:53.684740    2601 scope.go:117] "RemoveContainer" containerID="5f3ddba10c015e57d38029972ba39a185984356ce6aa4a2b241c002f5cbd7e12"
	Sep 20 19:48:53 kubernetes-upgrade-220027 kubelet[2601]: E0920 19:48:53.821420    2601 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-220027?timeout=10s\": dial tcp 192.168.72.238:8443: connect: connection refused" interval="800ms"
	Sep 20 19:48:53 kubernetes-upgrade-220027 kubelet[2601]: I0920 19:48:53.973928    2601 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-220027"
	Sep 20 19:48:53 kubernetes-upgrade-220027 kubelet[2601]: E0920 19:48:53.974796    2601 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.238:8443: connect: connection refused" node="kubernetes-upgrade-220027"
	Sep 20 19:48:54 kubernetes-upgrade-220027 kubelet[2601]: I0920 19:48:54.776152    2601 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-220027"
	Sep 20 19:48:56 kubernetes-upgrade-220027 kubelet[2601]: I0920 19:48:56.941497    2601 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-220027"
	Sep 20 19:48:56 kubernetes-upgrade-220027 kubelet[2601]: I0920 19:48:56.941763    2601 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-220027"
	Sep 20 19:48:56 kubernetes-upgrade-220027 kubelet[2601]: I0920 19:48:56.941879    2601 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 20 19:48:56 kubernetes-upgrade-220027 kubelet[2601]: I0920 19:48:56.943688    2601 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 20 19:48:57 kubernetes-upgrade-220027 kubelet[2601]: I0920 19:48:57.188391    2601 apiserver.go:52] "Watching apiserver"
	Sep 20 19:48:57 kubernetes-upgrade-220027 kubelet[2601]: I0920 19:48:57.229712    2601 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 20 19:48:57 kubernetes-upgrade-220027 kubelet[2601]: I0920 19:48:57.243212    2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3e66d70-2255-4e0d-b8c2-9d843b06ebf1-lib-modules\") pod \"kube-proxy-5cg5m\" (UID: \"a3e66d70-2255-4e0d-b8c2-9d843b06ebf1\") " pod="kube-system/kube-proxy-5cg5m"
	Sep 20 19:48:57 kubernetes-upgrade-220027 kubelet[2601]: I0920 19:48:57.243355    2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/28f47320-fdc7-4685-960e-750477c59154-tmp\") pod \"storage-provisioner\" (UID: \"28f47320-fdc7-4685-960e-750477c59154\") " pod="kube-system/storage-provisioner"
	Sep 20 19:48:57 kubernetes-upgrade-220027 kubelet[2601]: I0920 19:48:57.243415    2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3e66d70-2255-4e0d-b8c2-9d843b06ebf1-xtables-lock\") pod \"kube-proxy-5cg5m\" (UID: \"a3e66d70-2255-4e0d-b8c2-9d843b06ebf1\") " pod="kube-system/kube-proxy-5cg5m"
	Sep 20 19:48:57 kubernetes-upgrade-220027 kubelet[2601]: I0920 19:48:57.244061    2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94jrl\" (UniqueName: \"kubernetes.io/projected/28f47320-fdc7-4685-960e-750477c59154-kube-api-access-94jrl\") pod \"storage-provisioner\" (UID: \"28f47320-fdc7-4685-960e-750477c59154\") " pod="kube-system/storage-provisioner"
	Sep 20 19:48:57 kubernetes-upgrade-220027 kubelet[2601]: I0920 19:48:57.364551    2601 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 20 19:48:57 kubernetes-upgrade-220027 kubelet[2601]: I0920 19:48:57.508416    2601 scope.go:117] "RemoveContainer" containerID="5f8a31aa920b3352f7e597052843374890d3c3fac8c71549cf8b716bc1df76f6"
	Sep 20 19:48:58 kubernetes-upgrade-220027 kubelet[2601]: I0920 19:48:58.514196    2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.514173844 podStartE2EDuration="14.514173844s" podCreationTimestamp="2024-09-20 19:48:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-20 19:48:58.461586516 +0000 UTC m=+5.387106222" watchObservedRunningTime="2024-09-20 19:48:58.514173844 +0000 UTC m=+5.439693550"
	
	
	==> storage-provisioner [a9ba953ebda88c56df3e15b695e8c2f67404a632991909df19f6fdac70a2f029] <==
	I0920 19:48:58.590475       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 19:49:00.244774  800476 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19678-739831/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
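
The "bufio.Scanner: token too long" failure above means a single line in lastStart.txt exceeded bufio.Scanner's default 64 KiB token limit, so the helper could not echo the last start logs. As a minimal illustrative sketch (not minikube's actual code), a reader that raises the limit with Scanner.Buffer avoids this error:

	package main
	
	import (
		"bufio"
		"fmt"
		"os"
	)
	
	func main() {
		f, err := os.Open("/home/jenkins/minikube-integration/19678-739831/.minikube/logs/lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()
	
		sc := bufio.NewScanner(f)
		// Default max token size is bufio.MaxScanTokenSize (64 KiB);
		// allow single lines of up to 1 MiB instead.
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}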
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-220027 -n kubernetes-upgrade-220027
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-220027 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: kube-scheduler-kubernetes-upgrade-220027
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-220027 describe pod kube-scheduler-kubernetes-upgrade-220027
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-220027 describe pod kube-scheduler-kubernetes-upgrade-220027: exit status 1 (64.098205ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "kube-scheduler-kubernetes-upgrade-220027" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-220027 describe pod kube-scheduler-kubernetes-upgrade-220027: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-220027" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-220027
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-220027: (1.189362636s)
--- FAIL: TestKubernetesUpgrade (388.34s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (818.78s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-389954 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0920 19:46:24.179826  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p pause-389954 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: signal: killed (13m37.593280082s)

                                                
                                                
-- stdout --
	* [pause-389954] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-389954" primary control-plane node in "pause-389954" cluster
	* Updating the running kvm2 "pause-389954" VM ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 19:46:11.658183  795968 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:46:11.658347  795968 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:46:11.658355  795968 out.go:358] Setting ErrFile to fd 2...
	I0920 19:46:11.658361  795968 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:46:11.658611  795968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
	I0920 19:46:11.659369  795968 out.go:352] Setting JSON to false
	I0920 19:46:11.660731  795968 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12522,"bootTime":1726849050,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 19:46:11.660875  795968 start.go:139] virtualization: kvm guest
	I0920 19:46:11.663273  795968 out.go:177] * [pause-389954] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 19:46:11.664813  795968 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 19:46:11.664839  795968 notify.go:220] Checking for updates...
	I0920 19:46:11.667347  795968 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:46:11.668697  795968 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 19:46:11.669903  795968 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 19:46:11.671348  795968 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 19:46:11.672688  795968 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:46:11.674728  795968 config.go:182] Loaded profile config "pause-389954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:46:11.675375  795968 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:46:11.675461  795968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:46:11.701701  795968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39895
	I0920 19:46:11.702268  795968 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:46:11.702995  795968 main.go:141] libmachine: Using API Version  1
	I0920 19:46:11.703022  795968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:46:11.703464  795968 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:46:11.703712  795968 main.go:141] libmachine: (pause-389954) Calling .DriverName
	I0920 19:46:11.703999  795968 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:46:11.704464  795968 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:46:11.704543  795968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:46:11.725670  795968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42957
	I0920 19:46:11.726417  795968 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:46:11.727064  795968 main.go:141] libmachine: Using API Version  1
	I0920 19:46:11.727084  795968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:46:11.727680  795968 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:46:11.727863  795968 main.go:141] libmachine: (pause-389954) Calling .DriverName
	I0920 19:46:11.767600  795968 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 19:46:11.769215  795968 start.go:297] selected driver: kvm2
	I0920 19:46:11.769239  795968 start.go:901] validating driver "kvm2" against &{Name:pause-389954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-389954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:46:11.769439  795968 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:46:11.769900  795968 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:46:11.770030  795968 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19678-739831/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 19:46:11.787400  795968 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 19:46:11.788518  795968 cni.go:84] Creating CNI manager for ""
	I0920 19:46:11.788598  795968 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:46:11.788666  795968 start.go:340] cluster config:
	{Name:pause-389954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-389954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:46:11.788857  795968 iso.go:125] acquiring lock: {Name:mk7c8e0c52ea50ffb7ac28fb9347d4f667085c62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:46:11.790689  795968 out.go:177] * Starting "pause-389954" primary control-plane node in "pause-389954" cluster
	I0920 19:46:11.791818  795968 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:46:11.791865  795968 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 19:46:11.791878  795968 cache.go:56] Caching tarball of preloaded images
	I0920 19:46:11.791973  795968 preload.go:172] Found /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 19:46:11.791985  795968 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 19:46:11.792166  795968 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/pause-389954/config.json ...
	I0920 19:46:11.792421  795968 start.go:360] acquireMachinesLock for pause-389954: {Name:mke27b943eaf3105a3a7818ba8cbb5bd07aa92e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 19:46:23.748022  795968 start.go:364] duration metric: took 11.955571743s to acquireMachinesLock for "pause-389954"
	I0920 19:46:23.748103  795968 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:46:23.748114  795968 fix.go:54] fixHost starting: 
	I0920 19:46:23.748508  795968 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:46:23.748573  795968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:46:23.768915  795968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33787
	I0920 19:46:23.769342  795968 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:46:23.769949  795968 main.go:141] libmachine: Using API Version  1
	I0920 19:46:23.769978  795968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:46:23.770327  795968 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:46:23.770601  795968 main.go:141] libmachine: (pause-389954) Calling .DriverName
	I0920 19:46:23.770780  795968 main.go:141] libmachine: (pause-389954) Calling .GetState
	I0920 19:46:23.772403  795968 fix.go:112] recreateIfNeeded on pause-389954: state=Running err=<nil>
	W0920 19:46:23.772427  795968 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:46:23.774431  795968 out.go:177] * Updating the running kvm2 "pause-389954" VM ...
	I0920 19:46:23.775759  795968 machine.go:93] provisionDockerMachine start ...
	I0920 19:46:23.775785  795968 main.go:141] libmachine: (pause-389954) Calling .DriverName
	I0920 19:46:23.775982  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHHostname
	I0920 19:46:23.779043  795968 main.go:141] libmachine: (pause-389954) DBG | domain pause-389954 has defined MAC address 52:54:00:58:2e:46 in network mk-pause-389954
	I0920 19:46:23.779484  795968 main.go:141] libmachine: (pause-389954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:2e:46", ip: ""} in network mk-pause-389954: {Iface:virbr1 ExpiryTime:2024-09-20 20:45:04 +0000 UTC Type:0 Mac:52:54:00:58:2e:46 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:pause-389954 Clientid:01:52:54:00:58:2e:46}
	I0920 19:46:23.779510  795968 main.go:141] libmachine: (pause-389954) DBG | domain pause-389954 has defined IP address 192.168.39.60 and MAC address 52:54:00:58:2e:46 in network mk-pause-389954
	I0920 19:46:23.779760  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHPort
	I0920 19:46:23.779926  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHKeyPath
	I0920 19:46:23.780073  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHKeyPath
	I0920 19:46:23.780198  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHUsername
	I0920 19:46:23.780378  795968 main.go:141] libmachine: Using SSH client type: native
	I0920 19:46:23.780620  795968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 19:46:23.780635  795968 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:46:23.887681  795968 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-389954
	
	I0920 19:46:23.887723  795968 main.go:141] libmachine: (pause-389954) Calling .GetMachineName
	I0920 19:46:23.887984  795968 buildroot.go:166] provisioning hostname "pause-389954"
	I0920 19:46:23.888015  795968 main.go:141] libmachine: (pause-389954) Calling .GetMachineName
	I0920 19:46:23.888224  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHHostname
	I0920 19:46:23.891394  795968 main.go:141] libmachine: (pause-389954) DBG | domain pause-389954 has defined MAC address 52:54:00:58:2e:46 in network mk-pause-389954
	I0920 19:46:23.891791  795968 main.go:141] libmachine: (pause-389954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:2e:46", ip: ""} in network mk-pause-389954: {Iface:virbr1 ExpiryTime:2024-09-20 20:45:04 +0000 UTC Type:0 Mac:52:54:00:58:2e:46 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:pause-389954 Clientid:01:52:54:00:58:2e:46}
	I0920 19:46:23.891829  795968 main.go:141] libmachine: (pause-389954) DBG | domain pause-389954 has defined IP address 192.168.39.60 and MAC address 52:54:00:58:2e:46 in network mk-pause-389954
	I0920 19:46:23.892003  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHPort
	I0920 19:46:23.892169  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHKeyPath
	I0920 19:46:23.892324  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHKeyPath
	I0920 19:46:23.892455  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHUsername
	I0920 19:46:23.892659  795968 main.go:141] libmachine: Using SSH client type: native
	I0920 19:46:23.892902  795968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 19:46:23.892920  795968 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-389954 && echo "pause-389954" | sudo tee /etc/hostname
	I0920 19:46:24.016321  795968 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-389954
	
	I0920 19:46:24.016356  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHHostname
	I0920 19:46:24.019851  795968 main.go:141] libmachine: (pause-389954) DBG | domain pause-389954 has defined MAC address 52:54:00:58:2e:46 in network mk-pause-389954
	I0920 19:46:24.020300  795968 main.go:141] libmachine: (pause-389954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:2e:46", ip: ""} in network mk-pause-389954: {Iface:virbr1 ExpiryTime:2024-09-20 20:45:04 +0000 UTC Type:0 Mac:52:54:00:58:2e:46 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:pause-389954 Clientid:01:52:54:00:58:2e:46}
	I0920 19:46:24.020349  795968 main.go:141] libmachine: (pause-389954) DBG | domain pause-389954 has defined IP address 192.168.39.60 and MAC address 52:54:00:58:2e:46 in network mk-pause-389954
	I0920 19:46:24.020680  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHPort
	I0920 19:46:24.020902  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHKeyPath
	I0920 19:46:24.021137  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHKeyPath
	I0920 19:46:24.021290  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHUsername
	I0920 19:46:24.021492  795968 main.go:141] libmachine: Using SSH client type: native
	I0920 19:46:24.021675  795968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 19:46:24.021693  795968 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-389954' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-389954/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-389954' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:46:24.128622  795968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:46:24.128660  795968 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19678-739831/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-739831/.minikube}
	I0920 19:46:24.128711  795968 buildroot.go:174] setting up certificates
	I0920 19:46:24.128726  795968 provision.go:84] configureAuth start
	I0920 19:46:24.128742  795968 main.go:141] libmachine: (pause-389954) Calling .GetMachineName
	I0920 19:46:24.129068  795968 main.go:141] libmachine: (pause-389954) Calling .GetIP
	I0920 19:46:24.132549  795968 main.go:141] libmachine: (pause-389954) DBG | domain pause-389954 has defined MAC address 52:54:00:58:2e:46 in network mk-pause-389954
	I0920 19:46:24.132950  795968 main.go:141] libmachine: (pause-389954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:2e:46", ip: ""} in network mk-pause-389954: {Iface:virbr1 ExpiryTime:2024-09-20 20:45:04 +0000 UTC Type:0 Mac:52:54:00:58:2e:46 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:pause-389954 Clientid:01:52:54:00:58:2e:46}
	I0920 19:46:24.133006  795968 main.go:141] libmachine: (pause-389954) DBG | domain pause-389954 has defined IP address 192.168.39.60 and MAC address 52:54:00:58:2e:46 in network mk-pause-389954
	I0920 19:46:24.133289  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHHostname
	I0920 19:46:24.136146  795968 main.go:141] libmachine: (pause-389954) DBG | domain pause-389954 has defined MAC address 52:54:00:58:2e:46 in network mk-pause-389954
	I0920 19:46:24.136628  795968 main.go:141] libmachine: (pause-389954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:2e:46", ip: ""} in network mk-pause-389954: {Iface:virbr1 ExpiryTime:2024-09-20 20:45:04 +0000 UTC Type:0 Mac:52:54:00:58:2e:46 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:pause-389954 Clientid:01:52:54:00:58:2e:46}
	I0920 19:46:24.136662  795968 main.go:141] libmachine: (pause-389954) DBG | domain pause-389954 has defined IP address 192.168.39.60 and MAC address 52:54:00:58:2e:46 in network mk-pause-389954
	I0920 19:46:24.136803  795968 provision.go:143] copyHostCerts
	I0920 19:46:24.136874  795968 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem, removing ...
	I0920 19:46:24.136889  795968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem
	I0920 19:46:24.136958  795968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem (1078 bytes)
	I0920 19:46:24.137115  795968 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem, removing ...
	I0920 19:46:24.137129  795968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem
	I0920 19:46:24.137163  795968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem (1123 bytes)
	I0920 19:46:24.137258  795968 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem, removing ...
	I0920 19:46:24.137268  795968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem
	I0920 19:46:24.137296  795968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem (1679 bytes)
	I0920 19:46:24.137446  795968 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem org=jenkins.pause-389954 san=[127.0.0.1 192.168.39.60 localhost minikube pause-389954]
	I0920 19:46:24.277501  795968 provision.go:177] copyRemoteCerts
	I0920 19:46:24.277574  795968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:46:24.277601  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHHostname
	I0920 19:46:24.280541  795968 main.go:141] libmachine: (pause-389954) DBG | domain pause-389954 has defined MAC address 52:54:00:58:2e:46 in network mk-pause-389954
	I0920 19:46:24.280973  795968 main.go:141] libmachine: (pause-389954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:2e:46", ip: ""} in network mk-pause-389954: {Iface:virbr1 ExpiryTime:2024-09-20 20:45:04 +0000 UTC Type:0 Mac:52:54:00:58:2e:46 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:pause-389954 Clientid:01:52:54:00:58:2e:46}
	I0920 19:46:24.280999  795968 main.go:141] libmachine: (pause-389954) DBG | domain pause-389954 has defined IP address 192.168.39.60 and MAC address 52:54:00:58:2e:46 in network mk-pause-389954
	I0920 19:46:24.281132  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHPort
	I0920 19:46:24.281317  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHKeyPath
	I0920 19:46:24.281468  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHUsername
	I0920 19:46:24.281667  795968 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/pause-389954/id_rsa Username:docker}
	I0920 19:46:24.369910  795968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:46:24.397868  795968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0920 19:46:24.423864  795968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 19:46:24.459534  795968 provision.go:87] duration metric: took 330.789435ms to configureAuth
	I0920 19:46:24.459571  795968 buildroot.go:189] setting minikube options for container-runtime
	I0920 19:46:24.459807  795968 config.go:182] Loaded profile config "pause-389954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:46:24.459889  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHHostname
	I0920 19:46:24.462923  795968 main.go:141] libmachine: (pause-389954) DBG | domain pause-389954 has defined MAC address 52:54:00:58:2e:46 in network mk-pause-389954
	I0920 19:46:24.463321  795968 main.go:141] libmachine: (pause-389954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:2e:46", ip: ""} in network mk-pause-389954: {Iface:virbr1 ExpiryTime:2024-09-20 20:45:04 +0000 UTC Type:0 Mac:52:54:00:58:2e:46 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:pause-389954 Clientid:01:52:54:00:58:2e:46}
	I0920 19:46:24.463352  795968 main.go:141] libmachine: (pause-389954) DBG | domain pause-389954 has defined IP address 192.168.39.60 and MAC address 52:54:00:58:2e:46 in network mk-pause-389954
	I0920 19:46:24.463527  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHPort
	I0920 19:46:24.463695  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHKeyPath
	I0920 19:46:24.463814  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHKeyPath
	I0920 19:46:24.463989  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHUsername
	I0920 19:46:24.464165  795968 main.go:141] libmachine: Using SSH client type: native
	I0920 19:46:24.464359  795968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 19:46:24.464380  795968 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:46:32.476368  795968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
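The SSH command above writes the insecure-registry flag into /etc/sysconfig/crio.minikube and restarts CRI-O. A minimal sketch for checking the result by hand on the guest (assuming access via `minikube ssh -p pause-389954`; this is not part of the harness):

	# Environment file written by the provisioner
	cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

	# Confirm the unit definition and that the service came back after the restart
	systemctl cat crio --no-pager
	systemctl is-active crio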
	I0920 19:46:32.476398  795968 machine.go:96] duration metric: took 8.70062057s to provisionDockerMachine
	I0920 19:46:32.476412  795968 start.go:293] postStartSetup for "pause-389954" (driver="kvm2")
	I0920 19:46:32.476426  795968 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:46:32.476448  795968 main.go:141] libmachine: (pause-389954) Calling .DriverName
	I0920 19:46:32.476783  795968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:46:32.476835  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHHostname
	I0920 19:46:32.480166  795968 main.go:141] libmachine: (pause-389954) DBG | domain pause-389954 has defined MAC address 52:54:00:58:2e:46 in network mk-pause-389954
	I0920 19:46:32.480537  795968 main.go:141] libmachine: (pause-389954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:2e:46", ip: ""} in network mk-pause-389954: {Iface:virbr1 ExpiryTime:2024-09-20 20:45:04 +0000 UTC Type:0 Mac:52:54:00:58:2e:46 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:pause-389954 Clientid:01:52:54:00:58:2e:46}
	I0920 19:46:32.480579  795968 main.go:141] libmachine: (pause-389954) DBG | domain pause-389954 has defined IP address 192.168.39.60 and MAC address 52:54:00:58:2e:46 in network mk-pause-389954
	I0920 19:46:32.480709  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHPort
	I0920 19:46:32.480917  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHKeyPath
	I0920 19:46:32.481100  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHUsername
	I0920 19:46:32.481239  795968 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/pause-389954/id_rsa Username:docker}
	I0920 19:46:32.570365  795968 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:46:32.574962  795968 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 19:46:32.574995  795968 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/addons for local assets ...
	I0920 19:46:32.575064  795968 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/files for local assets ...
	I0920 19:46:32.575171  795968 filesync.go:149] local asset: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem -> 7484972.pem in /etc/ssl/certs
	I0920 19:46:32.575292  795968 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:46:32.585287  795968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 19:46:32.611163  795968 start.go:296] duration metric: took 134.732923ms for postStartSetup
	I0920 19:46:32.611218  795968 fix.go:56] duration metric: took 8.863103691s for fixHost
	I0920 19:46:32.611245  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHHostname
	I0920 19:46:32.613987  795968 main.go:141] libmachine: (pause-389954) DBG | domain pause-389954 has defined MAC address 52:54:00:58:2e:46 in network mk-pause-389954
	I0920 19:46:32.614307  795968 main.go:141] libmachine: (pause-389954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:2e:46", ip: ""} in network mk-pause-389954: {Iface:virbr1 ExpiryTime:2024-09-20 20:45:04 +0000 UTC Type:0 Mac:52:54:00:58:2e:46 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:pause-389954 Clientid:01:52:54:00:58:2e:46}
	I0920 19:46:32.614340  795968 main.go:141] libmachine: (pause-389954) DBG | domain pause-389954 has defined IP address 192.168.39.60 and MAC address 52:54:00:58:2e:46 in network mk-pause-389954
	I0920 19:46:32.614552  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHPort
	I0920 19:46:32.614787  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHKeyPath
	I0920 19:46:32.614960  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHKeyPath
	I0920 19:46:32.615125  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHUsername
	I0920 19:46:32.615363  795968 main.go:141] libmachine: Using SSH client type: native
	I0920 19:46:32.615581  795968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0920 19:46:32.615593  795968 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 19:46:32.727773  795968 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726861592.717311111
	
	I0920 19:46:32.727800  795968 fix.go:216] guest clock: 1726861592.717311111
	I0920 19:46:32.727809  795968 fix.go:229] Guest: 2024-09-20 19:46:32.717311111 +0000 UTC Remote: 2024-09-20 19:46:32.611222596 +0000 UTC m=+21.000663253 (delta=106.088515ms)
	I0920 19:46:32.727837  795968 fix.go:200] guest clock delta is within tolerance: 106.088515ms
	I0920 19:46:32.727844  795968 start.go:83] releasing machines lock for "pause-389954", held for 8.979790111s
	I0920 19:46:32.727881  795968 main.go:141] libmachine: (pause-389954) Calling .DriverName
	I0920 19:46:32.728170  795968 main.go:141] libmachine: (pause-389954) Calling .GetIP
	I0920 19:46:32.731384  795968 main.go:141] libmachine: (pause-389954) DBG | domain pause-389954 has defined MAC address 52:54:00:58:2e:46 in network mk-pause-389954
	I0920 19:46:32.731796  795968 main.go:141] libmachine: (pause-389954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:2e:46", ip: ""} in network mk-pause-389954: {Iface:virbr1 ExpiryTime:2024-09-20 20:45:04 +0000 UTC Type:0 Mac:52:54:00:58:2e:46 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:pause-389954 Clientid:01:52:54:00:58:2e:46}
	I0920 19:46:32.731819  795968 main.go:141] libmachine: (pause-389954) DBG | domain pause-389954 has defined IP address 192.168.39.60 and MAC address 52:54:00:58:2e:46 in network mk-pause-389954
	I0920 19:46:32.732041  795968 main.go:141] libmachine: (pause-389954) Calling .DriverName
	I0920 19:46:32.732587  795968 main.go:141] libmachine: (pause-389954) Calling .DriverName
	I0920 19:46:32.732785  795968 main.go:141] libmachine: (pause-389954) Calling .DriverName
	I0920 19:46:32.732894  795968 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:46:32.732942  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHHostname
	I0920 19:46:32.733041  795968 ssh_runner.go:195] Run: cat /version.json
	I0920 19:46:32.733085  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHHostname
	I0920 19:46:32.735797  795968 main.go:141] libmachine: (pause-389954) DBG | domain pause-389954 has defined MAC address 52:54:00:58:2e:46 in network mk-pause-389954
	I0920 19:46:32.736206  795968 main.go:141] libmachine: (pause-389954) DBG | domain pause-389954 has defined MAC address 52:54:00:58:2e:46 in network mk-pause-389954
	I0920 19:46:32.736353  795968 main.go:141] libmachine: (pause-389954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:2e:46", ip: ""} in network mk-pause-389954: {Iface:virbr1 ExpiryTime:2024-09-20 20:45:04 +0000 UTC Type:0 Mac:52:54:00:58:2e:46 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:pause-389954 Clientid:01:52:54:00:58:2e:46}
	I0920 19:46:32.736390  795968 main.go:141] libmachine: (pause-389954) DBG | domain pause-389954 has defined IP address 192.168.39.60 and MAC address 52:54:00:58:2e:46 in network mk-pause-389954
	I0920 19:46:32.736527  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHPort
	I0920 19:46:32.736690  795968 main.go:141] libmachine: (pause-389954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:2e:46", ip: ""} in network mk-pause-389954: {Iface:virbr1 ExpiryTime:2024-09-20 20:45:04 +0000 UTC Type:0 Mac:52:54:00:58:2e:46 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:pause-389954 Clientid:01:52:54:00:58:2e:46}
	I0920 19:46:32.736710  795968 main.go:141] libmachine: (pause-389954) DBG | domain pause-389954 has defined IP address 192.168.39.60 and MAC address 52:54:00:58:2e:46 in network mk-pause-389954
	I0920 19:46:32.736718  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHKeyPath
	I0920 19:46:32.736875  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHPort
	I0920 19:46:32.736884  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHUsername
	I0920 19:46:32.737099  795968 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/pause-389954/id_rsa Username:docker}
	I0920 19:46:32.737123  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHKeyPath
	I0920 19:46:32.737270  795968 main.go:141] libmachine: (pause-389954) Calling .GetSSHUsername
	I0920 19:46:32.737425  795968 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/pause-389954/id_rsa Username:docker}
	I0920 19:46:32.854145  795968 ssh_runner.go:195] Run: systemctl --version
	I0920 19:46:32.923241  795968 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:46:33.320569  795968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 19:46:33.348346  795968 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 19:46:33.348414  795968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:46:33.370565  795968 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
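The `find` step above parks any pre-existing bridge/podman CNI configs by renaming them with a `.mk_disabled` suffix; in this run nothing was found to disable. A simple way to see what is currently active on the node (a check, not something the harness runs):

	# Active CNI configs vs. ones minikube has parked as *.mk_disabled
	ls -l /etc/cni/net.d/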
	I0920 19:46:33.370596  795968 start.go:495] detecting cgroup driver to use...
	I0920 19:46:33.370673  795968 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:46:33.409908  795968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:46:33.496490  795968 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:46:33.496587  795968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:46:33.578412  795968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:46:33.640630  795968 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:46:33.870913  795968 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:46:34.075471  795968 docker.go:233] disabling docker service ...
	I0920 19:46:34.075532  795968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:46:34.095472  795968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:46:34.112829  795968 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:46:34.316952  795968 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:46:34.524359  795968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:46:34.540475  795968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:46:34.561816  795968 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 19:46:34.561912  795968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:46:34.577925  795968 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:46:34.578031  795968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:46:34.593897  795968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:46:34.606694  795968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:46:34.620942  795968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:46:34.633902  795968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:46:34.646991  795968 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:46:34.665262  795968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:46:34.684291  795968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:46:34.695720  795968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:46:34.705900  795968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:46:34.886758  795968 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 19:48:05.204715  795968 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.317911161s)
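The `systemctl restart crio` above only returned after roughly 90 seconds, which consumes a large share of the start budget. If that needs investigating, a reasonable first step (a suggestion, not something the test does) is to read the unit's journal on the guest:

	# Recent CRI-O service logs and current unit state
	sudo journalctl -u crio --no-pager -n 100
	sudo systemctl status crio --no-pager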
	I0920 19:48:05.204751  795968 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:48:05.204814  795968 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:48:05.210940  795968 start.go:563] Will wait 60s for crictl version
	I0920 19:48:05.211015  795968 ssh_runner.go:195] Run: which crictl
	I0920 19:48:05.215241  795968 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:48:05.260359  795968 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
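The version output above comes from `sudo /usr/bin/crictl version`, which needs no `--runtime-endpoint` flag because the earlier tee step wrote `runtime-endpoint: unix:///var/run/crio/crio.sock` into /etc/crictl.yaml. A quick sanity check along the same lines (sketch):

	# crictl resolves the CRI-O socket from /etc/crictl.yaml
	sudo crictl info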
	I0920 19:48:05.260450  795968 ssh_runner.go:195] Run: crio --version
	I0920 19:48:05.300768  795968 ssh_runner.go:195] Run: crio --version
	I0920 19:48:05.337712  795968 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 19:48:05.339095  795968 main.go:141] libmachine: (pause-389954) Calling .GetIP
	I0920 19:48:05.342444  795968 main.go:141] libmachine: (pause-389954) DBG | domain pause-389954 has defined MAC address 52:54:00:58:2e:46 in network mk-pause-389954
	I0920 19:48:05.342823  795968 main.go:141] libmachine: (pause-389954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:2e:46", ip: ""} in network mk-pause-389954: {Iface:virbr1 ExpiryTime:2024-09-20 20:45:04 +0000 UTC Type:0 Mac:52:54:00:58:2e:46 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:pause-389954 Clientid:01:52:54:00:58:2e:46}
	I0920 19:48:05.342863  795968 main.go:141] libmachine: (pause-389954) DBG | domain pause-389954 has defined IP address 192.168.39.60 and MAC address 52:54:00:58:2e:46 in network mk-pause-389954
	I0920 19:48:05.343131  795968 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 19:48:05.348049  795968 kubeadm.go:883] updating cluster {Name:pause-389954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1
ClusterName:pause-389954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-sec
urity-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:48:05.348172  795968 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:48:05.348213  795968 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:48:05.393126  795968 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 19:48:05.393158  795968 crio.go:433] Images already preloaded, skipping extraction
	I0920 19:48:05.393220  795968 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:48:05.447860  795968 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 19:48:05.447888  795968 cache_images.go:84] Images are preloaded, skipping loading
	I0920 19:48:05.447899  795968 kubeadm.go:934] updating node { 192.168.39.60 8443 v1.31.1 crio true true} ...
	I0920 19:48:05.448048  795968 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-389954 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:pause-389954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
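The kubelet unit text above is rendered into a systemd drop-in; the scp a few lines below places it at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. To inspect or re-apply it manually on the node (hedged sketch):

	# Drop-in written by the scp step below
	sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	sudo systemctl daemon-reload && sudo systemctl restart kubelet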
	I0920 19:48:05.448142  795968 ssh_runner.go:195] Run: crio config
	I0920 19:48:05.503087  795968 cni.go:84] Creating CNI manager for ""
	I0920 19:48:05.503118  795968 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:48:05.503130  795968 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:48:05.503165  795968 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.60 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-389954 NodeName:pause-389954 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:48:05.503390  795968 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.60
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-389954"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.60
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.60"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
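The kubeadm config above is uploaded as /var/tmp/minikube/kubeadm.yaml.new (see the scp below). A hedged way to see whether it differs from the copy already on the node, assuming the existing one lives at /var/tmp/minikube/kubeadm.yaml:

	# The .new path appears in the log; the location of the existing copy is an assumption
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new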
	I0920 19:48:05.503472  795968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:48:05.514111  795968 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:48:05.514185  795968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:48:05.524724  795968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0920 19:48:05.544070  795968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:48:05.566294  795968 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0920 19:48:05.587357  795968 ssh_runner.go:195] Run: grep 192.168.39.60	control-plane.minikube.internal$ /etc/hosts
	I0920 19:48:05.592652  795968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:48:05.754279  795968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:48:05.769680  795968 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/pause-389954 for IP: 192.168.39.60
	I0920 19:48:05.769705  795968 certs.go:194] generating shared ca certs ...
	I0920 19:48:05.769725  795968 certs.go:226] acquiring lock for ca certs: {Name:mkf559981e1ff96dd3b092845a7637f34a653668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:48:05.769906  795968 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key
	I0920 19:48:05.769965  795968 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key
	I0920 19:48:05.769978  795968 certs.go:256] generating profile certs ...
	I0920 19:48:05.770111  795968 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/pause-389954/client.key
	I0920 19:48:05.770212  795968 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/pause-389954/apiserver.key.6b691640
	I0920 19:48:05.770266  795968 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/pause-389954/proxy-client.key
	I0920 19:48:05.770420  795968 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem (1338 bytes)
	W0920 19:48:05.770460  795968 certs.go:480] ignoring /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497_empty.pem, impossibly tiny 0 bytes
	I0920 19:48:05.770473  795968 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:48:05.770503  795968 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:48:05.770538  795968 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:48:05.770584  795968 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem (1679 bytes)
	I0920 19:48:05.770640  795968 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem (1708 bytes)
	I0920 19:48:05.771644  795968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:48:05.798435  795968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 19:48:05.826713  795968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:48:05.854404  795968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:48:05.877964  795968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/pause-389954/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 19:48:05.901976  795968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/pause-389954/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 19:48:05.934751  795968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/pause-389954/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:48:05.962114  795968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/pause-389954/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 19:48:05.988391  795968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:48:06.013991  795968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/748497.pem --> /usr/share/ca-certificates/748497.pem (1338 bytes)
	I0920 19:48:06.041745  795968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/ssl/certs/7484972.pem --> /usr/share/ca-certificates/7484972.pem (1708 bytes)
	I0920 19:48:06.071581  795968 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:48:06.088923  795968 ssh_runner.go:195] Run: openssl version
	I0920 19:48:06.095707  795968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:48:06.107419  795968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:48:06.112243  795968 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:48:06.112312  795968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:48:06.118144  795968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:48:06.128613  795968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/748497.pem && ln -fs /usr/share/ca-certificates/748497.pem /etc/ssl/certs/748497.pem"
	I0920 19:48:06.150986  795968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/748497.pem
	I0920 19:48:06.159813  795968 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 18:33 /usr/share/ca-certificates/748497.pem
	I0920 19:48:06.159893  795968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/748497.pem
	I0920 19:48:06.182364  795968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/748497.pem /etc/ssl/certs/51391683.0"
	I0920 19:48:06.195784  795968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7484972.pem && ln -fs /usr/share/ca-certificates/7484972.pem /etc/ssl/certs/7484972.pem"
	I0920 19:48:06.208347  795968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7484972.pem
	I0920 19:48:06.220173  795968 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 18:33 /usr/share/ca-certificates/7484972.pem
	I0920 19:48:06.220237  795968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7484972.pem
	I0920 19:48:06.237150  795968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7484972.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:48:06.248719  795968 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:48:06.255177  795968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:48:06.261664  795968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:48:06.267441  795968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:48:06.273311  795968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:48:06.280639  795968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:48:06.286608  795968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
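Each `openssl x509 ... -checkend 86400` call above asks whether the certificate expires within the next 24 hours; the command exits 0 if it remains valid that far out and non-zero otherwise. For example:

	# Exit status 0: still valid for at least 86400s (24h); 1: would expire sooner
	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for 24h+" || echo "expires within 24h"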
	I0920 19:48:06.292576  795968 kubeadm.go:392] StartCluster: {Name:pause-389954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:pause-389954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-securi
ty-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:48:06.292685  795968 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:48:06.292738  795968 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:48:06.335004  795968 cri.go:89] found id: "6663df54cb293ef15879547432748299a8cc3139830f6842aede321899c17ac1"
	I0920 19:48:06.335033  795968 cri.go:89] found id: "31eb799ea04f384d8be1822e7898bfd701f19dbb14c078aad6e25afbc557094f"
	I0920 19:48:06.335038  795968 cri.go:89] found id: "29a304544473c2a3a7e625c06f40da669d812bcc2e089dcbd3fe907a2e4409a0"
	I0920 19:48:06.335043  795968 cri.go:89] found id: "e797ede4c36056a074646de4272585a36196290b22d3688e26b3fc4a5f8380e6"
	I0920 19:48:06.335047  795968 cri.go:89] found id: "c9d31d8f69a1f2ca5768287a2f9c1f35d50b9b8f1bbe6c60c8b01d9b08048017"
	I0920 19:48:06.335052  795968 cri.go:89] found id: "51d11998e1da3514c645c771ed9cc513da15a6fc9758286fe54d55bdb7686fb5"
	I0920 19:48:06.335056  795968 cri.go:89] found id: "ea5fdf56cdf54e2feded7e97fc3634384f1c3d250d7cc2a3ad9150eacba7ff11"
	I0920 19:48:06.335060  795968 cri.go:89] found id: "a0e9b0c5c0d424f53d293e64d1098ad4a99aa5acd90f39a28e4889cbcb183710"
	I0920 19:48:06.335064  795968 cri.go:89] found id: "c248de837c0196eaf06b581b2346c26079a0f226ad482b1bec8a4af8dbb93f20"
	I0920 19:48:06.335083  795968 cri.go:89] found id: "36aefb3b1de65586b8a88c16c512eb24738570b5a7ada7b7cd5201dc55c402d4"
	I0920 19:48:06.335091  795968 cri.go:89] found id: ""
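The container IDs above were collected with the label-filtered `crictl ps` shown just before them; the same query without `--quiet` gives a readable listing on the node (sketch):

	# Same filter as the harness, but with names, images and states instead of bare IDs
	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system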
	I0920 19:48:06.335147  795968 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
pause_test.go:94: failed to second start a running minikube with args: "out/minikube-linux-amd64 start -p pause-389954 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-389954 -n pause-389954
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-389954 -n pause-389954: exit status 2 (225.215339ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-389954 logs -n 25
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-010370 sudo cat                              | bridge-010370          | jenkins | v1.34.0 | 20 Sep 24 19:51 UTC | 20 Sep 24 19:51 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service             |                        |         |         |                     |                     |
	| ssh     | -p bridge-010370 sudo                                  | bridge-010370          | jenkins | v1.34.0 | 20 Sep 24 19:51 UTC | 20 Sep 24 19:51 UTC |
	|         | cri-dockerd --version                                  |                        |         |         |                     |                     |
	| ssh     | -p bridge-010370 sudo                                  | bridge-010370          | jenkins | v1.34.0 | 20 Sep 24 19:51 UTC |                     |
	|         | systemctl status containerd                            |                        |         |         |                     |                     |
	|         | --all --full --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p bridge-010370 sudo                                  | bridge-010370          | jenkins | v1.34.0 | 20 Sep 24 19:51 UTC | 20 Sep 24 19:51 UTC |
	|         | systemctl cat containerd                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p bridge-010370 sudo cat                              | bridge-010370          | jenkins | v1.34.0 | 20 Sep 24 19:51 UTC | 20 Sep 24 19:51 UTC |
	|         | /lib/systemd/system/containerd.service                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-010370 sudo cat                              | bridge-010370          | jenkins | v1.34.0 | 20 Sep 24 19:51 UTC | 20 Sep 24 19:51 UTC |
	|         | /etc/containerd/config.toml                            |                        |         |         |                     |                     |
	| ssh     | -p bridge-010370 sudo                                  | bridge-010370          | jenkins | v1.34.0 | 20 Sep 24 19:51 UTC | 20 Sep 24 19:51 UTC |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-010370 sudo                                  | bridge-010370          | jenkins | v1.34.0 | 20 Sep 24 19:51 UTC | 20 Sep 24 19:51 UTC |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p bridge-010370 sudo                                  | bridge-010370          | jenkins | v1.34.0 | 20 Sep 24 19:51 UTC | 20 Sep 24 19:51 UTC |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-010370 sudo find                             | bridge-010370          | jenkins | v1.34.0 | 20 Sep 24 19:51 UTC | 20 Sep 24 19:51 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p bridge-010370 sudo crio                             | bridge-010370          | jenkins | v1.34.0 | 20 Sep 24 19:51 UTC | 20 Sep 24 19:51 UTC |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p bridge-010370                                       | bridge-010370          | jenkins | v1.34.0 | 20 Sep 24 19:51 UTC | 20 Sep 24 19:51 UTC |
	| start   | -p embed-certs-983417                                  | embed-certs-983417     | jenkins | v1.34.0 | 20 Sep 24 19:51 UTC | 20 Sep 24 19:52 UTC |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-983417            | embed-certs-983417     | jenkins | v1.34.0 | 20 Sep 24 19:52 UTC | 20 Sep 24 19:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p embed-certs-983417                                  | embed-certs-983417     | jenkins | v1.34.0 | 20 Sep 24 19:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-172076             | no-preload-172076      | jenkins | v1.34.0 | 20 Sep 24 19:53 UTC | 20 Sep 24 19:53 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p no-preload-172076                                   | no-preload-172076      | jenkins | v1.34.0 | 20 Sep 24 19:53 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-983417                 | embed-certs-983417     | jenkins | v1.34.0 | 20 Sep 24 19:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-100079        | old-k8s-version-100079 | jenkins | v1.34.0 | 20 Sep 24 19:55 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| start   | -p embed-certs-983417                                  | embed-certs-983417     | jenkins | v1.34.0 | 20 Sep 24 19:55 UTC |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-172076                  | no-preload-172076      | jenkins | v1.34.0 | 20 Sep 24 19:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-172076                                   | no-preload-172076      | jenkins | v1.34.0 | 20 Sep 24 19:55 UTC |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-100079                              | old-k8s-version-100079 | jenkins | v1.34.0 | 20 Sep 24 19:57 UTC | 20 Sep 24 19:57 UTC |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-100079             | old-k8s-version-100079 | jenkins | v1.34.0 | 20 Sep 24 19:57 UTC | 20 Sep 24 19:57 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-100079                              | old-k8s-version-100079 | jenkins | v1.34.0 | 20 Sep 24 19:57 UTC |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=kvm2                                          |                        |         |         |                     |                     |
	|         | --container-runtime=crio                               |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 19:57:15
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 19:57:15.155498  811275 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:57:15.155598  811275 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:57:15.155604  811275 out.go:358] Setting ErrFile to fd 2...
	I0920 19:57:15.155607  811275 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:57:15.155760  811275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
	I0920 19:57:15.156305  811275 out.go:352] Setting JSON to false
	I0920 19:57:15.157306  811275 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":13185,"bootTime":1726849050,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 19:57:15.157406  811275 start.go:139] virtualization: kvm guest
	I0920 19:57:15.159795  811275 out.go:177] * [old-k8s-version-100079] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 19:57:15.161073  811275 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 19:57:15.161078  811275 notify.go:220] Checking for updates...
	I0920 19:57:15.163698  811275 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:57:15.165058  811275 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 19:57:15.166402  811275 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 19:57:15.167779  811275 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 19:57:15.169293  811275 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:57:15.170980  811275 config.go:182] Loaded profile config "old-k8s-version-100079": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 19:57:15.171366  811275 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:57:15.171413  811275 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:57:15.186428  811275 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41181
	I0920 19:57:15.186883  811275 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:57:15.187413  811275 main.go:141] libmachine: Using API Version  1
	I0920 19:57:15.187446  811275 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:57:15.187834  811275 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:57:15.187997  811275 main.go:141] libmachine: (old-k8s-version-100079) Calling .DriverName
	I0920 19:57:15.189819  811275 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 19:57:15.190994  811275 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:57:15.191278  811275 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:57:15.191311  811275 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:57:15.206313  811275 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45033
	I0920 19:57:15.206779  811275 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:57:15.207344  811275 main.go:141] libmachine: Using API Version  1
	I0920 19:57:15.207369  811275 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:57:15.207725  811275 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:57:15.207870  811275 main.go:141] libmachine: (old-k8s-version-100079) Calling .DriverName
	I0920 19:57:15.243014  811275 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 19:57:15.244209  811275 start.go:297] selected driver: kvm2
	I0920 19:57:15.244220  811275 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-100079 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-100079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.161 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:57:15.244338  811275 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:57:15.245117  811275 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:57:15.245217  811275 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19678-739831/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 19:57:15.260343  811275 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 19:57:15.260814  811275 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:57:15.260847  811275 cni.go:84] Creating CNI manager for ""
	I0920 19:57:15.260907  811275 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 19:57:15.260949  811275 start.go:340] cluster config:
	{Name:old-k8s-version-100079 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-100079 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.161 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:57:15.261096  811275 iso.go:125] acquiring lock: {Name:mk7c8e0c52ea50ffb7ac28fb9347d4f667085c62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:57:15.262967  811275 out.go:177] * Starting "old-k8s-version-100079" primary control-plane node in "old-k8s-version-100079" cluster
	I0920 19:57:15.264134  811275 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 19:57:15.264189  811275 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 19:57:15.264200  811275 cache.go:56] Caching tarball of preloaded images
	I0920 19:57:15.264277  811275 preload.go:172] Found /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 19:57:15.264290  811275 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0920 19:57:15.264401  811275 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/old-k8s-version-100079/config.json ...
	I0920 19:57:15.264615  811275 start.go:360] acquireMachinesLock for old-k8s-version-100079: {Name:mke27b943eaf3105a3a7818ba8cbb5bd07aa92e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 19:57:18.959111  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:57:22.031182  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:57:28.111122  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:57:31.183162  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:57:37.263139  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:57:40.335244  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:57:46.415122  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:57:49.487181  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:57:55.567111  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:57:58.639181  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:58:04.719139  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:58:07.791111  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:58:13.871104  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:58:16.943144  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:58:23.023132  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:58:26.095073  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:58:32.175109  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:58:35.247162  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:58:41.327074  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:58:44.399108  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:58:50.479111  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:58:53.551140  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:58:59.631088  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:59:02.703156  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:59:08.783095  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:59:11.855141  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:59:17.935111  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:59:21.007200  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:59:27.087262  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:59:30.159141  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:59:36.239145  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:59:39.311154  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	I0920 19:59:45.391149  810557 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.160:22: connect: no route to host
	
	
	==> CRI-O <==
	Sep 20 19:59:49 pause-389954 crio[2926]: time="2024-09-20 19:59:49.718536036Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726862389718510938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=62e8a49c-86c7-44bf-af34-f07a759db5a7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:59:49 pause-389954 crio[2926]: time="2024-09-20 19:59:49.719066569Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=749f5710-82d0-4d2f-8f49-b85c879a1e50 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:59:49 pause-389954 crio[2926]: time="2024-09-20 19:59:49.719117165Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=749f5710-82d0-4d2f-8f49-b85c879a1e50 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:59:49 pause-389954 crio[2926]: time="2024-09-20 19:59:49.719192328Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:90472bad1ce3ed618f2e2ae9693fddd8a6d50eeeef9f9a54e7878e9fe7d6ef43,PodSandboxId:19a99127d74467c63e2c7d99c4421140525d91b65944b4803814097b4b6ae300,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:16,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726862345642004696,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-389954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1feaa20bf576398e4137794021009f8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 16,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68868385af620930f87766b6e33490822066eb70b3fab4ad52bc57c2479c40e6,PodSandboxId:67be7c46a142f8ef5e3653101e881d7bd1f8e80dbb21f0ab0d931cebf081d4be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726862179312148145,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-389954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afc1f20a87d41aebdec02f9927aff286,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=749f5710-82d0-4d2f-8f49-b85c879a1e50 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:59:49 pause-389954 crio[2926]: time="2024-09-20 19:59:49.754881592Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=edbaf5ba-ac13-4278-819e-4110574e03ec name=/runtime.v1.RuntimeService/Version
	Sep 20 19:59:49 pause-389954 crio[2926]: time="2024-09-20 19:59:49.754953933Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=edbaf5ba-ac13-4278-819e-4110574e03ec name=/runtime.v1.RuntimeService/Version
	Sep 20 19:59:49 pause-389954 crio[2926]: time="2024-09-20 19:59:49.756293157Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eaf64125-1efb-4e58-ace5-aaf3e88c53d5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:59:49 pause-389954 crio[2926]: time="2024-09-20 19:59:49.756660419Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726862389756638643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eaf64125-1efb-4e58-ace5-aaf3e88c53d5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:59:49 pause-389954 crio[2926]: time="2024-09-20 19:59:49.757480815Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=261a888e-9b17-45d3-988a-c42dc5296061 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:59:49 pause-389954 crio[2926]: time="2024-09-20 19:59:49.757529072Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=261a888e-9b17-45d3-988a-c42dc5296061 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:59:49 pause-389954 crio[2926]: time="2024-09-20 19:59:49.757607423Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:90472bad1ce3ed618f2e2ae9693fddd8a6d50eeeef9f9a54e7878e9fe7d6ef43,PodSandboxId:19a99127d74467c63e2c7d99c4421140525d91b65944b4803814097b4b6ae300,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:16,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726862345642004696,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-389954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1feaa20bf576398e4137794021009f8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 16,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68868385af620930f87766b6e33490822066eb70b3fab4ad52bc57c2479c40e6,PodSandboxId:67be7c46a142f8ef5e3653101e881d7bd1f8e80dbb21f0ab0d931cebf081d4be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726862179312148145,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-389954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afc1f20a87d41aebdec02f9927aff286,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=261a888e-9b17-45d3-988a-c42dc5296061 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:59:49 pause-389954 crio[2926]: time="2024-09-20 19:59:49.790389516Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c19ce64e-b5f4-4e2b-a70a-b89af48faa85 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:59:49 pause-389954 crio[2926]: time="2024-09-20 19:59:49.790460188Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c19ce64e-b5f4-4e2b-a70a-b89af48faa85 name=/runtime.v1.RuntimeService/Version
	Sep 20 19:59:49 pause-389954 crio[2926]: time="2024-09-20 19:59:49.791450311Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b06cffdd-2f2f-4848-95ce-2a3aa8f08806 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:59:49 pause-389954 crio[2926]: time="2024-09-20 19:59:49.791900034Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726862389791875102,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b06cffdd-2f2f-4848-95ce-2a3aa8f08806 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:59:49 pause-389954 crio[2926]: time="2024-09-20 19:59:49.792478658Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c5fe2342-82df-4d51-9570-15d8fa1e27bf name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:59:49 pause-389954 crio[2926]: time="2024-09-20 19:59:49.792527078Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c5fe2342-82df-4d51-9570-15d8fa1e27bf name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:59:49 pause-389954 crio[2926]: time="2024-09-20 19:59:49.792606106Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:90472bad1ce3ed618f2e2ae9693fddd8a6d50eeeef9f9a54e7878e9fe7d6ef43,PodSandboxId:19a99127d74467c63e2c7d99c4421140525d91b65944b4803814097b4b6ae300,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:16,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726862345642004696,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-389954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1feaa20bf576398e4137794021009f8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 16,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68868385af620930f87766b6e33490822066eb70b3fab4ad52bc57c2479c40e6,PodSandboxId:67be7c46a142f8ef5e3653101e881d7bd1f8e80dbb21f0ab0d931cebf081d4be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726862179312148145,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-389954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afc1f20a87d41aebdec02f9927aff286,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c5fe2342-82df-4d51-9570-15d8fa1e27bf name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:59:49 pause-389954 crio[2926]: time="2024-09-20 19:59:49.823847053Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e32ca315-e029-454c-8d9b-69bd0b4d150c name=/runtime.v1.RuntimeService/Version
	Sep 20 19:59:49 pause-389954 crio[2926]: time="2024-09-20 19:59:49.823931360Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e32ca315-e029-454c-8d9b-69bd0b4d150c name=/runtime.v1.RuntimeService/Version
	Sep 20 19:59:49 pause-389954 crio[2926]: time="2024-09-20 19:59:49.825411511Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=259bee13-a9d7-4fa8-a27d-43d870bb49d6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:59:49 pause-389954 crio[2926]: time="2024-09-20 19:59:49.825854124Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726862389825831837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=259bee13-a9d7-4fa8-a27d-43d870bb49d6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 19:59:49 pause-389954 crio[2926]: time="2024-09-20 19:59:49.826387545Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=79031f72-7a4a-4224-b81c-d4155bf2e844 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:59:49 pause-389954 crio[2926]: time="2024-09-20 19:59:49.826434516Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=79031f72-7a4a-4224-b81c-d4155bf2e844 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 19:59:49 pause-389954 crio[2926]: time="2024-09-20 19:59:49.826525758Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:90472bad1ce3ed618f2e2ae9693fddd8a6d50eeeef9f9a54e7878e9fe7d6ef43,PodSandboxId:19a99127d74467c63e2c7d99c4421140525d91b65944b4803814097b4b6ae300,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:16,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726862345642004696,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-389954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1feaa20bf576398e4137794021009f8,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 16,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68868385af620930f87766b6e33490822066eb70b3fab4ad52bc57c2479c40e6,PodSandboxId:67be7c46a142f8ef5e3653101e881d7bd1f8e80dbb21f0ab0d931cebf081d4be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726862179312148145,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-389954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afc1f20a87d41aebdec02f9927aff286,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=79031f72-7a4a-4224-b81c-d4155bf2e844 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	90472bad1ce3e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   44 seconds ago      Exited              kube-apiserver      16                  19a99127d7446       kube-apiserver-pause-389954
	68868385af620       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   3 minutes ago       Running             kube-scheduler      4                   67be7c46a142f       kube-scheduler-pause-389954
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.125299] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.301253] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +4.307018] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[  +0.063763] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.715971] systemd-fstab-generator[887]: Ignoring "noauto" option for root device
	[  +0.541896] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.544671] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	[  +0.093821] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.183357] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.225840] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	[ +11.551376] kauditd_printk_skb: 89 callbacks suppressed
	[Sep20 19:46] systemd-fstab-generator[2592]: Ignoring "noauto" option for root device
	[  +0.205679] systemd-fstab-generator[2672]: Ignoring "noauto" option for root device
	[  +0.234079] systemd-fstab-generator[2696]: Ignoring "noauto" option for root device
	[  +0.215663] systemd-fstab-generator[2741]: Ignoring "noauto" option for root device
	[  +0.376477] systemd-fstab-generator[2791]: Ignoring "noauto" option for root device
	[Sep20 19:48] systemd-fstab-generator[3036]: Ignoring "noauto" option for root device
	[  +0.095789] kauditd_printk_skb: 175 callbacks suppressed
	[  +2.967523] systemd-fstab-generator[3474]: Ignoring "noauto" option for root device
	[ +19.614563] kauditd_printk_skb: 92 callbacks suppressed
	[Sep20 19:50] kauditd_printk_skb: 2 callbacks suppressed
	[Sep20 19:52] systemd-fstab-generator[7839]: Ignoring "noauto" option for root device
	[ +23.899803] kauditd_printk_skb: 58 callbacks suppressed
	[Sep20 19:56] systemd-fstab-generator[8528]: Ignoring "noauto" option for root device
	[ +22.509672] kauditd_printk_skb: 48 callbacks suppressed
	
	
	==> kernel <==
	 19:59:50 up 14 min,  0 users,  load average: 0.04, 0.12, 0.12
	Linux pause-389954 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [90472bad1ce3ed618f2e2ae9693fddd8a6d50eeeef9f9a54e7878e9fe7d6ef43] <==
	I0920 19:59:05.799611       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 19:59:06.312641       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	W0920 19:59:06.314160       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:59:06.314295       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0920 19:59:06.317325       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 19:59:06.320990       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0920 19:59:06.321025       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0920 19:59:06.321295       1 instance.go:232] Using reconciler: lease
	W0920 19:59:06.322327       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:59:07.314881       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:59:07.314880       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:59:07.323106       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:59:08.734875       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:59:08.837624       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:59:08.859242       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:59:10.923223       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:59:10.980957       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:59:11.916760       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:59:14.828795       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:59:15.289812       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:59:15.586103       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:59:21.882867       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:59:22.600957       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 19:59:22.816985       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0920 19:59:26.322741       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-scheduler [68868385af620930f87766b6e33490822066eb70b3fab4ad52bc57c2479c40e6] <==
	E0920 19:59:22.923569       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.60:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0920 19:59:23.297911       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.60:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0920 19:59:23.298006       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.60:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0920 19:59:24.405309       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.60:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0920 19:59:24.405359       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.60:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0920 19:59:25.681978       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.60:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0920 19:59:25.682069       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.60:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0920 19:59:26.762404       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.60:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0920 19:59:26.762466       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.60:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused" logger="UnhandledError"
	W0920 19:59:27.327561       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.60:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.60:53082->192.168.39.60:8443: read: connection reset by peer
	W0920 19:59:27.327633       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.60:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.60:53066->192.168.39.60:8443: read: connection reset by peer
	E0920 19:59:27.327646       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.60:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.60:53082->192.168.39.60:8443: read: connection reset by peer" logger="UnhandledError"
	E0920 19:59:27.327678       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.60:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.60:53066->192.168.39.60:8443: read: connection reset by peer" logger="UnhandledError"
	W0920 19:59:27.327834       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.60:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.60:53044->192.168.39.60:8443: read: connection reset by peer
	W0920 19:59:27.327856       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.60:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.60:53110->192.168.39.60:8443: read: connection reset by peer
	E0920 19:59:27.327865       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.60:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.60:53044->192.168.39.60:8443: read: connection reset by peer" logger="UnhandledError"
	E0920 19:59:27.327892       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.60:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.60:53110->192.168.39.60:8443: read: connection reset by peer" logger="UnhandledError"
	W0920 19:59:35.804850       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.60:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0920 19:59:35.804947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.60:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused" logger="UnhandledError"
	W0920 19:59:41.552148       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.60:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0920 19:59:41.552240       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.60:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused" logger="UnhandledError"
	W0920 19:59:42.563563       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.60:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0920 19:59:42.563672       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.60:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused" logger="UnhandledError"
	W0920 19:59:48.207253       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.60:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	E0920 19:59:48.207349       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.60:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kubelet <==
	Sep 20 19:59:35 pause-389954 kubelet[8535]: E0920 19:59:35.270634    8535 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-389954?timeout=10s\": dial tcp 192.168.39.60:8443: connect: connection refused" interval="7s"
	Sep 20 19:59:36 pause-389954 kubelet[8535]: E0920 19:59:36.632144    8535 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-controller-manager_kube-controller-manager-pause-389954_kube-system_f73210882fbc57e566a046405557eb86_1\" is already in use by a1d621a047a2cc6d31b50c74bf9f983d1327985a74acbfb2d29c696b062946d2. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="576c1a78a5e973d3b83adeec87dd0a79323d0b54b36af612e281e8cda072b060"
	Sep 20 19:59:36 pause-389954 kubelet[8535]: E0920 19:59:36.632303    8535 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-controller-manager,Image:registry.k8s.io/kube-controller-manager:v1.31.1,Command:[kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-name=mk --cluster-signing-cert-file=/var/lib/minikube/certs/ca.crt --cluster-signing-key-file=/var/lib/minikube/certs/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=false --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --root-ca-file=/var/lib/minikube/certs/ca.crt --service-account-private-key-file=/var/lib/minikube/certs/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-se
rvice-account-credentials=true],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{200 -3} {<nil>} 200m DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flexvolume-dir,ReadOnly:false,MountPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/var/lib/minikube/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kubeconfig,ReadOnly:true,MountPath:/etc/kubernetes/controller-manager.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly
:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-pause-389954_ku
be-system(f73210882fbc57e566a046405557eb86): CreateContainerError: the container name \"k8s_kube-controller-manager_kube-controller-manager-pause-389954_kube-system_f73210882fbc57e566a046405557eb86_1\" is already in use by a1d621a047a2cc6d31b50c74bf9f983d1327985a74acbfb2d29c696b062946d2. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Sep 20 19:59:36 pause-389954 kubelet[8535]: E0920 19:59:36.633509    8535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"the container name \\\"k8s_kube-controller-manager_kube-controller-manager-pause-389954_kube-system_f73210882fbc57e566a046405557eb86_1\\\" is already in use by a1d621a047a2cc6d31b50c74bf9f983d1327985a74acbfb2d29c696b062946d2. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-controller-manager-pause-389954" podUID="f73210882fbc57e566a046405557eb86"
	Sep 20 19:59:37 pause-389954 kubelet[8535]: I0920 19:59:37.003608    8535 scope.go:117] "RemoveContainer" containerID="90472bad1ce3ed618f2e2ae9693fddd8a6d50eeeef9f9a54e7878e9fe7d6ef43"
	Sep 20 19:59:37 pause-389954 kubelet[8535]: E0920 19:59:37.003795    8535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-pause-389954_kube-system(e1feaa20bf576398e4137794021009f8)\"" pod="kube-system/kube-apiserver-pause-389954" podUID="e1feaa20bf576398e4137794021009f8"
	Sep 20 19:59:38 pause-389954 kubelet[8535]: E0920 19:59:38.490250    8535 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.39.60:8443: connect: connection refused" event="&Event{ObjectMeta:{pause-389954.17f70c008da2a10b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:pause-389954,UID:pause-389954,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node pause-389954 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:pause-389954,},FirstTimestamp:2024-09-20 19:56:18.635653387 +0000 UTC m=+0.281406558,LastTimestamp:2024-09-20 19:56:18.635653387 +0000 UTC m=+0.281406558,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:pause-389954,}"
	Sep 20 19:59:38 pause-389954 kubelet[8535]: E0920 19:59:38.690867    8535 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726862378690460462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:59:38 pause-389954 kubelet[8535]: E0920 19:59:38.690974    8535 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726862378690460462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:59:41 pause-389954 kubelet[8535]: I0920 19:59:41.284295    8535 kubelet_node_status.go:72] "Attempting to register node" node="pause-389954"
	Sep 20 19:59:41 pause-389954 kubelet[8535]: E0920 19:59:41.285477    8535 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.60:8443: connect: connection refused" node="pause-389954"
	Sep 20 19:59:42 pause-389954 kubelet[8535]: E0920 19:59:42.272074    8535 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-389954?timeout=10s\": dial tcp 192.168.39.60:8443: connect: connection refused" interval="7s"
	Sep 20 19:59:43 pause-389954 kubelet[8535]: W0920 19:59:43.034896    8535 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.60:8443: connect: connection refused
	Sep 20 19:59:43 pause-389954 kubelet[8535]: E0920 19:59:43.035276    8535 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.60:8443: connect: connection refused" logger="UnhandledError"
	Sep 20 19:59:43 pause-389954 kubelet[8535]: E0920 19:59:43.634905    8535 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_etcd_etcd-pause-389954_kube-system_1a42b0ade755bba4195bbbe3d69b32fa_1\" is already in use by 714803d576cd26c08a954d010442961a5e0b8032e09184f35d43aaf294585d5e. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="afda9b1872d7ecacbfe34045cb2cae39175207d4e5d8369e08443282baa0ad0d"
	Sep 20 19:59:43 pause-389954 kubelet[8535]: E0920 19:59:43.635100    8535 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:etcd,Image:registry.k8s.io/etcd:3.5.15-0,Command:[etcd --advertise-client-urls=https://192.168.39.60:2379 --cert-file=/var/lib/minikube/certs/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/minikube/etcd --experimental-initial-corrupt-check=true --experimental-watch-progress-notify-interval=5s --initial-advertise-peer-urls=https://192.168.39.60:2380 --initial-cluster=pause-389954=https://192.168.39.60:2380 --key-file=/var/lib/minikube/certs/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.39.60:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://192.168.39.60:2380 --name=pause-389954 --peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/var/lib/minikube/certs/etcd/peer.key --peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt --proxy-refr
esh-interval=70000 --snapshot-count=10000 --trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{104857600 0} {<nil>} 100Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etcd-data,ReadOnly:false,MountPath:/var/lib/minikube/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-certs,ReadOnly:false,MountPath:/var/lib/minikube/certs/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:Prob
eHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:15,PeriodSeconds:1,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod etcd-pause-389954_kube-system(1a42b0ade755bba4195bbbe3d69b32fa): CreateContainerError: the container name \"k8s_e
tcd_etcd-pause-389954_kube-system_1a42b0ade755bba4195bbbe3d69b32fa_1\" is already in use by 714803d576cd26c08a954d010442961a5e0b8032e09184f35d43aaf294585d5e. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Sep 20 19:59:43 pause-389954 kubelet[8535]: E0920 19:59:43.636419    8535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"the container name \\\"k8s_etcd_etcd-pause-389954_kube-system_1a42b0ade755bba4195bbbe3d69b32fa_1\\\" is already in use by 714803d576cd26c08a954d010442961a5e0b8032e09184f35d43aaf294585d5e. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/etcd-pause-389954" podUID="1a42b0ade755bba4195bbbe3d69b32fa"
	Sep 20 19:59:48 pause-389954 kubelet[8535]: I0920 19:59:48.287808    8535 kubelet_node_status.go:72] "Attempting to register node" node="pause-389954"
	Sep 20 19:59:48 pause-389954 kubelet[8535]: E0920 19:59:48.289079    8535 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.60:8443: connect: connection refused" node="pause-389954"
	Sep 20 19:59:48 pause-389954 kubelet[8535]: E0920 19:59:48.492275    8535 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.39.60:8443: connect: connection refused" event="&Event{ObjectMeta:{pause-389954.17f70c008da2a10b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:pause-389954,UID:pause-389954,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node pause-389954 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:pause-389954,},FirstTimestamp:2024-09-20 19:56:18.635653387 +0000 UTC m=+0.281406558,LastTimestamp:2024-09-20 19:56:18.635653387 +0000 UTC m=+0.281406558,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:pause-389954,}"
	Sep 20 19:59:48 pause-389954 kubelet[8535]: E0920 19:59:48.693862    8535 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726862388693100923,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:59:48 pause-389954 kubelet[8535]: E0920 19:59:48.693989    8535 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726862388693100923,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:59:49 pause-389954 kubelet[8535]: E0920 19:59:49.274388    8535 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-389954?timeout=10s\": dial tcp 192.168.39.60:8443: connect: connection refused" interval="7s"
	Sep 20 19:59:49 pause-389954 kubelet[8535]: I0920 19:59:49.625280    8535 scope.go:117] "RemoveContainer" containerID="90472bad1ce3ed618f2e2ae9693fddd8a6d50eeeef9f9a54e7878e9fe7d6ef43"
	Sep 20 19:59:49 pause-389954 kubelet[8535]: E0920 19:59:49.625437    8535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-pause-389954_kube-system(e1feaa20bf576398e4137794021009f8)\"" pod="kube-system/kube-apiserver-pause-389954" podUID="e1feaa20bf576398e4137794021009f8"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-389954 -n pause-389954
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-389954 -n pause-389954: exit status 2 (216.561524ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-389954" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (818.78s)
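Note: the repeated CreateContainerError lines above ("the container name ... is already in use") show CRI-O still holding exited containers from the previous kubelet start, so the restarted static pods cannot reuse their names. A minimal manual-recovery sketch, assuming crictl is available on the node and using the kube-controller-manager container ID reported in the log (substitute the actual conflicting container):

	# list all containers, including exited ones, whose names match the conflicting pod
	sudo crictl ps -a --name kube-controller-manager
	# remove the stale container so the kubelet can recreate it under the same name
	sudo crictl rm a1d621a047a2cc6d31b50c74bf9f983d1327985a74acbfb2d29c696b062946d2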

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (7200.055s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
E0920 20:08:54.256720  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/custom-flannel-010370/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
E0920 20:09:55.905319  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
E0920 20:10:01.124155  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/calico-010370/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
E0920 20:10:02.256848  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/enable-default-cni-010370/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
E0920 20:10:17.322660  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/custom-flannel-010370/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
E0920 20:10:33.372825  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/flannel-010370/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
E0920 20:11:11.144700  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/bridge-010370/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
E0920 20:11:24.180240  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
E0920 20:11:25.320098  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/enable-default-cni-010370/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
E0920 20:11:26.979723  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/auto-010370/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
E0920 20:11:56.436559  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/flannel-010370/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
E0920 20:12:00.370033  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/kindnet-010370/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.161:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.161:8443: connect: connection refused
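The warnings above come from the helper's pod-list poll at helpers_test.go:329: it repeatedly asks the apiserver at 192.168.72.161:8443 for pods matching k8s-app=kubernetes-dashboard and logs a warning on each failed attempt while the control plane is unreachable. What follows is a minimal, hypothetical sketch of that kind of poll using client-go; the kubeconfig path, poll interval, and wait budget are illustrative assumptions, not minikube's actual helper code, while the namespace and label selector are the ones shown in the warnings.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an illustrative assumption.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(9 * time.Minute) // assumed wait budget
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(
			context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"},
		)
		if err != nil {
			// Same shape as the log lines above: warn and keep polling
			// while the apiserver refuses connections.
			fmt.Printf("WARNING: pod list returned: %v\n", err)
			time.Sleep(5 * time.Second)
			continue
		}
		fmt.Printf("found %d matching pods\n", len(pods.Items))
		return
	}
	fmt.Println("timed out waiting for kubernetes-dashboard pods")
}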
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (30m18s)
		TestNetworkPlugins/group (20m51s)
		TestStartStop (27m39s)
		TestStartStop/group/default-k8s-diff-port (12m37s)
		TestStartStop/group/default-k8s-diff-port/serial (12m37s)
		TestStartStop/group/default-k8s-diff-port/serial/SecondStart (7m47s)
		TestStartStop/group/embed-certs (20m51s)
		TestStartStop/group/embed-certs/serial (20m51s)
		TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6m26s)
		TestStartStop/group/no-preload (21m21s)
		TestStartStop/group/no-preload/serial (21m21s)
		TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6m41s)
		TestStartStop/group/old-k8s-version (21m38s)
		TestStartStop/group/old-k8s-version/serial (21m38s)
		TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (3m41s)
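The panic above is Go's per-binary test deadline firing: goroutine 7201 below is the alarm started by testing.(*M).startAlarm, which panics with the list of still-running tests and dumps every goroutine once the -timeout (2h0m0s for this run) elapses. A tiny, self-contained illustration of the same mechanism, using an assumed 5-second timeout instead of two hours:

package demo

import (
	"testing"
	"time"
)

// TestOutlivesDeadline reproduces the failure mode in miniature. Run it with
// an assumed short deadline, e.g. "go test -timeout 5s": after five seconds
// the testing package's alarm goroutine fires and the binary panics with
// "test timed out after 5s", printing the running tests and all goroutine
// stacks, just as in the report above.
func TestOutlivesDeadline(t *testing.T) {
	time.Sleep(10 * time.Second) // deliberately longer than the -timeout
}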

                                                
                                                
goroutine 7201 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2373 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

                                                
                                                
goroutine 1 [chan receive, 28 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc00069c340, 0xc0005efbc8)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
testing.runTests(0xc00088c078, {0x4585140, 0x2b, 0x2b}, {0xffffffffffffffff?, 0x411b30?, 0x4641680?})
	/usr/local/go/src/testing/testing.go:2166 +0x43d
testing.(*M).Run(0xc000726c80)
	/usr/local/go/src/testing/testing.go:2034 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc000726c80)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0xa8

                                                
                                                
goroutine 8 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000656f80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 166 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000544c00, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 133
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 165 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x321f520)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 133
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 5368 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001be84d0, 0x15)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001c4cd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3242040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001be8500)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001ba02f0, {0x3204700, 0xc001d722a0}, 0x1, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001ba02f0, 0x3b9aca00, 0x0, 0x1, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 5378
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 4825 [chan receive, 21 minutes]:
testing.(*T).Run(0xc0014a11e0, {0x258e05c?, 0x0?}, 0xc0017eaa00)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0014a11e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0014a11e0, 0xc000890b80)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4821
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2572 [chan send, 95 minutes]:
os/exec.(*Cmd).watchCtx(0xc001ba7500, 0xc001bf5260)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 2571
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 5873 [chan receive, 10 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000972a00, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5868
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 5938 [syscall, 8 minutes]:
syscall.Syscall6(0xf7, 0x3, 0x15, 0xc0012a4b30, 0x4, 0xc0008f2120, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:95 +0x39
os.(*Process).pidfdWait(0xc001d22288?)
	/usr/local/go/src/os/pidfd_linux.go:92 +0x236
os.(*Process).wait(0x30?)
	/usr/local/go/src/os/exec_unix.go:27 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc000206600)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc000206600)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc0014a16c0, 0xc000206600)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x3228a38, 0xc0004c4000}, 0xc0014a16c0, {0xc001ab6000, 0x1c}, {0x0?, 0xc001d3bf60?}, {0x559033?, 0x4b162f?}, {0xc0001ceb00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xce
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0014a16c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0014a16c0, 0xc0001c4600)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 5863
	/usr/local/go/src/testing/testing.go:1743 +0x390
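Goroutine 5938 above is validateSecondStart blocked in os/exec.(*Cmd).Wait while the subprocess it launched through integration.Run (helpers_test.go:103) is still running; it only returns when the process exits or the 2h alarm fires. A minimal, hypothetical sketch of bounding such a subprocess with its own deadline via exec.CommandContext; the binary name, arguments, and 10-minute budget are illustrative assumptions, not the actual test helper.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Assumed per-command budget; the real test relies on the binary-wide
	// -timeout instead, which is why the hang surfaces as a goroutine dump.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()

	// Command and profile name are placeholders standing in for the second
	// "start" invocation that validateSecondStart performs.
	cmd := exec.CommandContext(ctx, "minikube", "start", "-p", "default-k8s-diff-port")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// When the deadline passes, the child is killed; ctx.Err() reports
		// context.DeadlineExceeded and err carries the kill signal.
		fmt.Printf("command failed: %v (ctx: %v)\n%s\n", err, ctx.Err(), out)
		return
	}
	fmt.Printf("command succeeded:\n%s\n", out)
}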

                                                
                                                
goroutine 1938 [IO wait, 98 minutes]:
internal/poll.runtime_pollWait(0x7f0620219e98, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0001c4580?, 0x2c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0001c4580)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x295
net.(*netFD).accept(0xc0001c4580)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc00196e0c0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc00196e0c0)
	/usr/local/go/src/net/tcpsock.go:372 +0x30
net/http.(*Server).Serve(0xc001650f00, {0x321c370, 0xc00196e0c0})
	/usr/local/go/src/net/http/server.go:3330 +0x30c
net/http.(*Server).ListenAndServe(0xc001650f00)
	/usr/local/go/src/net/http/server.go:3259 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc0000389c0?, 0xc000038d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 1919
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x129

                                                
                                                
goroutine 6288 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x321f520)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 6287
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 144 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000544bd0, 0x2c)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0006d5d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3242040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000544c00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000502a40, {0x3204700, 0xc000284b70}, 0x1, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000502a40, 0x3b9aca00, 0x0, 0x1, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 166
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 5700 [chan receive, 6 minutes]:
testing.(*T).Run(0xc000039380, {0x25b1fc2?, 0xc001d37d70?}, 0xc001cb0280)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc000039380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc000039380, 0xc00059b300)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4827
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 145 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3228c40, 0xc0005941c0}, 0xc000507750, 0xc000b5ff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3228c40, 0xc0005941c0}, 0x0?, 0xc000507750, 0xc000507798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3228c40?, 0xc0005941c0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0005077d0?, 0x593ba4?, 0xc000a564b0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 166
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 178 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 145
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 4262 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc000640730)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1666 +0x5e5
testing.tRunner(0xc0014a04e0, 0xc0008b0048)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 4185
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 5891 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0009729d0, 0x2)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0016b7d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3242040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000972a00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001cfe000, {0x3204700, 0xc001cce0f0}, 0x1, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001cfe000, 0x3b9aca00, 0x0, 0x1, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 5873
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 5765 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x321f520)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5739
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 6289 [chan receive, 3 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000972a40, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 6287
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2130 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2017
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 5945 [IO wait]:
internal/poll.runtime_pollWait(0x7f0618477658, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc00088fe00?, 0xc001482800?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00088fe00, {0xc001482800, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc00088fe00, {0xc001482800?, 0x10?, 0xc0012d08a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0005f7848, {0xc001482800?, 0xc00148285f?, 0x6f?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc001d55e78, {0xc001482800?, 0x0?, 0xc001d55e78?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc0013a82b8, {0x3204d40, 0xc001d55e78})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0013a8008, {0x7f061a7eab38, 0xc001d225b8}, 0xc0012d0a10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0013a8008, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc0013a8008, {0xc0014bc000, 0x1000, 0xc00209a540?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc0012ae960, {0xc000142900, 0x9, 0x4555740?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3203260, 0xc0012ae960}, {0xc000142900, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc000142900, 0x9, 0x47b965?}, {0x3203260?, 0xc0012ae960?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0001428c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0012d0fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc00020ac00)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2250 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 5944
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 5157 [chan receive, 24 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0017ce280, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5172
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 6322 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 6241
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 5118 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3228c40, 0xc0005941c0}, 0xc00132ff50, 0xc000b64f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3228c40, 0xc0005941c0}, 0x7?, 0xc00132ff50, 0xc00132ff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3228c40?, 0xc0005941c0?}, 0xc00069dba0?, 0x559940?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00132ffd0?, 0x593ba4?, 0xc001cceb10?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 5157
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 2813 [select, 94 minutes]:
net/http.(*persistConn).writeLoop(0xc001d20c60)
	/usr/local/go/src/net/http/transport.go:2519 +0xe7
created by net/http.(*Transport).dialConn in goroutine 2810
	/usr/local/go/src/net/http/transport.go:1875 +0x15a5

                                                
                                                
goroutine 4250 [chan receive, 28 minutes]:
testing.(*T).Run(0xc0000396c0, {0x258cd17?, 0x559033?}, 0x2f08530)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0000396c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0000396c0, 0x2f08338)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2402 [chan send, 94 minutes]:
os/exec.(*Cmd).watchCtx(0xc001a67980, 0xc001d560e0)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 2023
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 5939 [IO wait, 3 minutes]:
internal/poll.runtime_pollWait(0x7f0620219b80, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001444720?, 0xc001aa8b0d?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001444720, {0xc001aa8b0d, 0x4f3, 0x4f3})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0005f76e8, {0xc001aa8b0d?, 0x20d3680?, 0x215?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc001c94150, {0x3203040, 0xc001c96078})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x32031c0, 0xc001c94150}, {0x3203040, 0xc001c96078}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0005f76e8?, {0x32031c0, 0xc001c94150})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0005f76e8, {0x32031c0, 0xc001c94150})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x32031c0, 0xc001c94150}, {0x32030c0, 0xc0005f76e8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc0001c4600?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 5938
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

                                                
                                                
goroutine 5863 [chan receive, 8 minutes]:
testing.(*T).Run(0xc0013ae1a0, {0x25987af?, 0xc00137c570?}, 0xc0001c4600)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0013ae1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0013ae1a0, 0xc00088e500)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4824
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 5893 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5892
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2016 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0008917d0, 0x27)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001488d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3242040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000891800)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000741b70, {0x3204700, 0xc0021ae1b0}, 0x1, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000741b70, 0x3b9aca00, 0x0, 0x1, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2119
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 5955 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x3228a38, 0xc00048af50}, {0x321c9d0, 0xc001daf620}, 0x1, 0x0, 0xc001c77c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x3228a38?, 0xc000199a40?}, 0x3b9aca00, 0xc00006fe10?, 0x1, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x3228a38, 0xc000199a40}, 0xc0012f49c0, {0xc001854ee8, 0x11}, {0x25ad332, 0x14}, {0x25c0743, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x3228a38, 0xc000199a40}, 0xc0012f49c0, {0xc001854ee8, 0x11}, {0x2596bdf?, 0xc001d37760?}, {0x559033?, 0x4b162f?}, {0xc0001ce900, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x139
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0012f49c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0012f49c0, 0xc00088ea00)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 5618
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2375 [chan send, 95 minutes]:
os/exec.(*Cmd).watchCtx(0xc0019abc80, 0xc0019e2cb0)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 2374
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 2812 [select, 94 minutes]:
net/http.(*persistConn).readLoop(0xc001d20c60)
	/usr/local/go/src/net/http/transport.go:2325 +0xca5
created by net/http.(*Transport).dialConn in goroutine 2810
	/usr/local/go/src/net/http/transport.go:1874 +0x154f

                                                
                                                
goroutine 2017 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3228c40, 0xc0005941c0}, 0xc00050b750, 0xc001489f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3228c40, 0xc0005941c0}, 0x60?, 0xc00050b750, 0xc00050b798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3228c40?, 0xc0005941c0?}, 0xc0012f51e0?, 0x559940?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593b45?, 0xc0007a8780?, 0xc00071a460?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2119
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 4822 [chan receive, 21 minutes]:
testing.(*T).Run(0xc0014a0340, {0x258e05c?, 0x0?}, 0xc0017ea000)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0014a0340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0014a0340, 0xc000890a40)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4821
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 4185 [chan receive, 30 minutes]:
testing.(*T).Run(0xc000038000, {0x258cd17?, 0x55917c?}, 0xc0008b0048)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc000038000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd3
testing.tRunner(0xc000038000, 0x2f082f0)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 4821 [chan receive, 28 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc0000fc000, 0x2f08530)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 4250
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 5605 [chan receive, 21 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001be8100, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5584
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2118 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x321f520)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2044
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 5378 [chan receive, 23 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001be8500, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5356
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 5529 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5528
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 4872 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x321f520)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4871
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 5766 [chan receive, 20 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0007258c0, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5739
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 5618 [chan receive, 8 minutes]:
testing.(*T).Run(0xc00069dd40, {0x25b1fc2?, 0x6e6550223a226573?}, 0xc00088ea00)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00069dd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00069dd40, 0xc0017eaa00)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4825
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 5892 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3228c40, 0xc0005941c0}, 0xc00148af50, 0xc00148af98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3228c40, 0xc0005941c0}, 0xc0?, 0xc00148af50, 0xc00148af98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3228c40?, 0xc0005941c0?}, 0xc00069db01?, 0xc0005941c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0013797d0?, 0x593ba4?, 0xc001bf41c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 5873
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 2119 [chan receive, 96 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000891800, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2044
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 5369 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3228c40, 0xc0005941c0}, 0xc001875f50, 0xc001875f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3228c40, 0xc0005941c0}, 0x30?, 0xc001875f50, 0xc001875f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3228c40?, 0xc0005941c0?}, 0xc000038ea0?, 0x559940?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593b45?, 0xc000206180?, 0xc001d56230?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 5378
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 4827 [chan receive, 20 minutes]:
testing.(*T).Run(0xc0014a1520, {0x258e05c?, 0x0?}, 0xc00059b300)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0014a1520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0014a1520, 0xc000890f00)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4821
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 5753 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3228c40, 0xc0005941c0}, 0xc00132c750, 0xc00132c798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3228c40, 0xc0005941c0}, 0x0?, 0xc00132c750, 0xc00132c798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3228c40?, 0xc0005941c0?}, 0x9e92b6?, 0xc001b9e480?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001b9e480?, 0x13?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 5766
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 4857 [chan receive, 26 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001be85c0, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4852
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 4824 [chan receive, 12 minutes]:
testing.(*T).Run(0xc0014a1040, {0x258e05c?, 0x0?}, 0xc00088e500)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0014a1040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0014a1040, 0xc000890b40)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4821
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 5386 [chan receive, 23 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001be9bc0, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5384
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 5527 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001be80d0, 0x14)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0000add80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3242040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001be8100)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0005028f0, {0x3204700, 0xc001c5ac60}, 0x1, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0005028f0, 0x3b9aca00, 0x0, 0x1, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 5605
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 5604 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x321f520)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5584
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 5117 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc0017ce250, 0x15)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001696d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3242040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0017ce280)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0014465b0, {0x3204700, 0xc001ccec60}, 0x1, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0014465b0, 0x3b9aca00, 0x0, 0x1, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 5157
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 4867 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001be8590, 0x16)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0000aad80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3242040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001be85c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001668010, {0x3204700, 0xc000744150}, 0x1, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001668010, 0x3b9aca00, 0x0, 0x1, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4857
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 5385 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x321f520)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5384
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 4856 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x321f520)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4852
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 5070 [chan receive, 24 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001bba700, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5068
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 4868 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3228c40, 0xc0005941c0}, 0xc001302f50, 0xc001302f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3228c40, 0xc0005941c0}, 0x7?, 0xc001302f50, 0xc001302f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3228c40?, 0xc0005941c0?}, 0xc0014a1a00?, 0x559940?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0005097d0?, 0x593ba4?, 0xc001d88240?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4857
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 5095 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc001bba6d0, 0x15)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000b63d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3242040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001bba700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001bec200, {0x3204700, 0xc0021ae180}, 0x1, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001bec200, 0x3b9aca00, 0x0, 0x1, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 5070
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 4869 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4868
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 5249 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x321f520)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5356
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 5752 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0007257d0, 0x3)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0005ebd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3242040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0007258c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00147d0e0, {0x3204700, 0xc0018197a0}, 0x1, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00147d0e0, 0x3b9aca00, 0x0, 0x1, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 5766
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 5069 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x321f520)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5068
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 4823 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000640730)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0014a0b60)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0014a0b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0014a0b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0014a0b60, 0xc000890b00)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4821
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 6287 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x3228a38, 0xc00056f110}, {0x321c9d0, 0xc00150f140}, 0x1, 0x0, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x3228a38?, 0xc000696620?}, 0x3b9aca00, 0xc00006fe10?, 0x1, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x3228a38, 0xc000696620}, 0xc0012f5380, {0xc001854030, 0x16}, {0x25ad332, 0x14}, {0x25c0743, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x3228a38, 0xc000696620}, 0xc0012f5380, {0xc001854030, 0x16}, {0x25a0cf5?, 0xc00137c760?}, {0x559033?, 0x4b162f?}, {0xc000206180, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x139
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0012f5380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0012f5380, 0xc000656700)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 5510
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 5510 [chan receive, 3 minutes]:
testing.(*T).Run(0xc00069d520, {0x25b1fc2?, 0xc001378d70?}, 0xc000656700)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00069d520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00069d520, 0xc0017ea000)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4822
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 4865 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4864
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 5754 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5753
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 5528 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3228c40, 0xc0005941c0}, 0xc000509f50, 0xc000509f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3228c40, 0xc0005941c0}, 0x0?, 0xc000509f50, 0xc000509f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3228c40?, 0xc0005941c0?}, 0x9e92b6?, 0xc001975200?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593b45?, 0xc001b9e780?, 0xc001c9e700?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 5605
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 6240 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000972950, 0x0)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000097580?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3242040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000972a40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001bec020, {0x3204700, 0xc0018180c0}, 0x1, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001bec020, 0x3b9aca00, 0x0, 0x1, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 6289
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 4864 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3228c40, 0xc0005941c0}, 0xc001d3b750, 0xc001305f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3228c40, 0xc0005941c0}, 0xa0?, 0xc001d3b750, 0xc001d3b798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3228c40?, 0xc0005941c0?}, 0xc0013aed00?, 0x559940?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001d3b7d0?, 0x593ba4?, 0xc0016718c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4873
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 4873 [chan receive, 26 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0009727c0, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4871
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 5156 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x321f520)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5172
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 4863 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000972750, 0x16)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001693d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3242040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0009727c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001cb3520, {0x3204700, 0xc0007c5680}, 0x1, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001cb3520, 0x3b9aca00, 0x0, 0x1, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4873
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 5119 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5118
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 5096 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3228c40, 0xc0005941c0}, 0xc001697f50, 0xc001697f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3228c40, 0xc0005941c0}, 0xc0?, 0xc001697f50, 0xc001697f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3228c40?, 0xc0005941c0?}, 0xc0013ae1a0?, 0x559940?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001d367d0?, 0x593ba4?, 0xc0008ecac0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 5070
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 5097 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5096
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 5370 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5369
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 5389 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001be9b90, 0x15)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0000b1d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3242040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001be9bc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001bed730, {0x3204700, 0xc0021afce0}, 0x1, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001bed730, 0x3b9aca00, 0x0, 0x1, 0xc0005941c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 5386
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 5390 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3228c40, 0xc0005941c0}, 0xc001d39750, 0xc001d39798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3228c40, 0xc0005941c0}, 0x0?, 0xc001d39750, 0xc001d39798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3228c40?, 0xc0005941c0?}, 0xc0008d1380?, 0xc0013b0420?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593b45?, 0xc0007a9e00?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 5386
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 5391 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5390
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 5974 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x3228a38, 0xc000238a80}, {0x321c9d0, 0xc001376060}, 0x1, 0x0, 0xc001c7bc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x3228a38?, 0xc00044e4d0?}, 0x3b9aca00, 0xc0008a9e10?, 0x1, 0xc0008a9c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x3228a38, 0xc00044e4d0}, 0xc0012f4000, {0xc000530b58, 0x12}, {0x25ad332, 0x14}, {0x25c0743, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x3228a38, 0xc00044e4d0}, 0xc0012f4000, {0xc000530b58, 0x12}, {0x2598799?, 0xc001879760?}, {0x559033?, 0x4b162f?}, {0xc0001ce000, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x139
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0012f4000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0012f4000, 0xc001cb0280)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 5700
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 5872 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x321f520)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5868
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 5940 [IO wait]:
internal/poll.runtime_pollWait(0x7f0620219a78, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0014447e0?, 0xc001793973?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0014447e0, {0xc001793973, 0x68d, 0x68d})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0005f7718, {0xc001793973?, 0x7f061a7be868?, 0xfe46?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc001c94180, {0x3203040, 0xc000288230})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x32031c0, 0xc001c94180}, {0x3203040, 0xc000288230}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0005f7718?, {0x32031c0, 0xc001c94180})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0005f7718, {0x32031c0, 0xc001c94180})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x32031c0, 0xc001c94180}, {0x32030c0, 0xc0005f7718}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc001d36fa8?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 5938
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

                                                
                                                
goroutine 5941 [select, 8 minutes]:
os/exec.(*Cmd).watchCtx(0xc000206600, 0xc000594620)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 5938
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 5934 [IO wait]:
internal/poll.runtime_pollWait(0x7f062021a4c8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001cb1100?, 0xc0004c7800?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001cb1100, {0xc0004c7800, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc001cb1100, {0xc0004c7800?, 0x9d68b2?, 0xc001c479a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc001c96000, {0xc0004c7800?, 0xc0000bab00?, 0xc0004c7862?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc001d55e18, {0xc0004c7800?, 0x0?, 0xc001d55e18?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc0013a8d38, {0x3204d40, 0xc001d55e18})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0013a8a88, {0x3204220, 0xc001c96000}, 0xc001c47a10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0013a8a88, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc0013a8a88, {0xc0007c2000, 0x1000, 0xc00209a540?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc001444e40, {0xc000142820, 0x9, 0x4555740?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3203260, 0xc001444e40}, {0xc000142820, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc000142820, 0x9, 0x47b965?}, {0x3203260?, 0xc001444e40?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0001427e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc001c47fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc0018ec000)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2250 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 5933
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 6241 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3228c40, 0xc0005941c0}, 0xc0012a7f50, 0xc0012a7f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3228c40, 0xc0005941c0}, 0x90?, 0xc0012a7f50, 0xc0012a7f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3228c40?, 0xc0005941c0?}, 0xc0013ae9c0?, 0x559940?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00187afd0?, 0x593ba4?, 0xc0015b2690?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 6289
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                    

Test pass (168/221)

Order | Passed test | Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 9.25
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 6.42
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.14
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.6
22 TestOffline 81.72
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 127.98
31 TestAddons/serial/GCPAuth/Namespaces 2.32
35 TestAddons/parallel/InspektorGadget 11.19
39 TestAddons/parallel/Headlamp 19.61
40 TestAddons/parallel/CloudSpanner 5.66
41 TestAddons/parallel/LocalPath 11.25
42 TestAddons/parallel/NvidiaDevicePlugin 6.71
43 TestAddons/parallel/Yakd 11.97
44 TestAddons/StoppedEnableDisable 93.68
45 TestCertOptions 75.39
46 TestCertExpiration 306.26
48 TestForceSystemdFlag 65.27
49 TestForceSystemdEnv 96.65
51 TestKVMDriverInstallOrUpdate 3.38
55 TestErrorSpam/setup 39.91
56 TestErrorSpam/start 0.34
57 TestErrorSpam/status 0.73
58 TestErrorSpam/pause 1.62
59 TestErrorSpam/unpause 1.76
60 TestErrorSpam/stop 5.29
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 56.06
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 39.4
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.07
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.44
72 TestFunctional/serial/CacheCmd/cache/add_local 1.53
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.68
77 TestFunctional/serial/CacheCmd/cache/delete 0.09
78 TestFunctional/serial/MinikubeKubectlCmd 0.1
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
80 TestFunctional/serial/ExtraConfig 37.32
81 TestFunctional/serial/ComponentHealth 0.06
82 TestFunctional/serial/LogsCmd 1.45
83 TestFunctional/serial/LogsFileCmd 1.45
84 TestFunctional/serial/InvalidService 4.11
86 TestFunctional/parallel/ConfigCmd 0.37
87 TestFunctional/parallel/DashboardCmd 128.08
88 TestFunctional/parallel/DryRun 0.27
89 TestFunctional/parallel/InternationalLanguage 0.14
90 TestFunctional/parallel/StatusCmd 0.78
94 TestFunctional/parallel/ServiceCmdConnect 10.57
95 TestFunctional/parallel/AddonsCmd 0.13
98 TestFunctional/parallel/SSHCmd 0.52
99 TestFunctional/parallel/CpCmd 1.39
101 TestFunctional/parallel/FileSync 0.2
102 TestFunctional/parallel/CertSync 1.14
106 TestFunctional/parallel/NodeLabels 0.06
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.5
110 TestFunctional/parallel/License 0.15
111 TestFunctional/parallel/ServiceCmd/DeployApp 11.21
112 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
113 TestFunctional/parallel/ProfileCmd/profile_list 0.36
114 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
124 TestFunctional/parallel/Version/short 0.05
125 TestFunctional/parallel/Version/components 0.93
126 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
127 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
128 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
129 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
130 TestFunctional/parallel/ImageCommands/ImageBuild 2.22
131 TestFunctional/parallel/ImageCommands/Setup 1
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.61
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.94
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.25
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.93
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
139 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
140 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
141 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
142 TestFunctional/parallel/ServiceCmd/List 0.45
143 TestFunctional/parallel/ServiceCmd/JSONOutput 0.48
144 TestFunctional/parallel/ServiceCmd/HTTPS 0.31
145 TestFunctional/parallel/MountCmd/any-port 61.37
146 TestFunctional/parallel/ServiceCmd/Format 0.29
147 TestFunctional/parallel/ServiceCmd/URL 0.28
148 TestFunctional/parallel/MountCmd/specific-port 1.85
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.8
150 TestFunctional/delete_echo-server_images 0.04
151 TestFunctional/delete_my-image_image 0.02
152 TestFunctional/delete_minikube_cached_images 0.02
156 TestMultiControlPlane/serial/StartCluster 191.37
157 TestMultiControlPlane/serial/DeployApp 5.05
158 TestMultiControlPlane/serial/PingHostFromPods 1.2
159 TestMultiControlPlane/serial/AddWorkerNode 55.46
160 TestMultiControlPlane/serial/NodeLabels 0.07
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.88
162 TestMultiControlPlane/serial/CopyFile 12.8
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 4.18
171 TestMultiControlPlane/serial/RestartCluster 211.24
172 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
173 TestMultiControlPlane/serial/AddSecondaryNode 78.87
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.85
178 TestJSONOutput/start/Command 90.46
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.71
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.6
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 7.37
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.19
206 TestMainNoArgs 0.04
207 TestMinikubeProfile 88.42
210 TestMountStart/serial/StartWithMountFirst 27.34
211 TestMountStart/serial/VerifyMountFirst 0.37
212 TestMountStart/serial/StartWithMountSecond 27.84
213 TestMountStart/serial/VerifyMountSecond 0.36
214 TestMountStart/serial/DeleteFirst 0.68
215 TestMountStart/serial/VerifyMountPostDelete 0.37
216 TestMountStart/serial/Stop 1.66
217 TestMountStart/serial/RestartStopped 22.66
218 TestMountStart/serial/VerifyMountPostStop 0.37
221 TestMultiNode/serial/FreshStart2Nodes 116.7
222 TestMultiNode/serial/DeployApp2Nodes 3.55
223 TestMultiNode/serial/PingHostFrom2Pods 0.8
224 TestMultiNode/serial/AddNode 54.1
225 TestMultiNode/serial/MultiNodeLabels 0.06
226 TestMultiNode/serial/ProfileList 0.56
227 TestMultiNode/serial/CopyFile 7.06
228 TestMultiNode/serial/StopNode 2.29
229 TestMultiNode/serial/StartAfterStop 37.35
231 TestMultiNode/serial/DeleteNode 2.27
233 TestMultiNode/serial/RestartMultiNode 179.48
234 TestMultiNode/serial/ValidateNameConflict 44.61
241 TestScheduledStopUnix 115.02
245 TestRunningBinaryUpgrade 181.47
250 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
251 TestNoKubernetes/serial/StartWithK8s 119.77
252 TestNoKubernetes/serial/StartWithStopK8s 44.62
264 TestNoKubernetes/serial/Start 25.6
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
266 TestNoKubernetes/serial/ProfileList 0.98
267 TestNoKubernetes/serial/Stop 1.55
268 TestNoKubernetes/serial/StartNoArgs 68.81
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
270 TestStoppedBinaryUpgrade/Setup 0.36
271 TestStoppedBinaryUpgrade/Upgrade 109.99
280 TestPause/serial/Start 82.41
282 TestStoppedBinaryUpgrade/MinikubeLogs 0.82
TestDownloadOnly/v1.20.0/json-events (9.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-675466 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-675466 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (9.245596117s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.25s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0920 18:12:37.436630  748497 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0920 18:12:37.436767  748497 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-675466
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-675466: exit status 85 (60.533517ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-675466 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |          |
	|         | -p download-only-675466        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:12:28
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:12:28.230597  748509 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:12:28.230904  748509 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:12:28.230915  748509 out.go:358] Setting ErrFile to fd 2...
	I0920 18:12:28.230922  748509 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:12:28.231137  748509 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
	W0920 18:12:28.231279  748509 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19678-739831/.minikube/config/config.json: open /home/jenkins/minikube-integration/19678-739831/.minikube/config/config.json: no such file or directory
	I0920 18:12:28.231894  748509 out.go:352] Setting JSON to true
	I0920 18:12:28.232950  748509 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6898,"bootTime":1726849050,"procs":272,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:12:28.233056  748509 start.go:139] virtualization: kvm guest
	I0920 18:12:28.235829  748509 out.go:97] [download-only-675466] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0920 18:12:28.235938  748509 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 18:12:28.235981  748509 notify.go:220] Checking for updates...
	I0920 18:12:28.237418  748509 out.go:169] MINIKUBE_LOCATION=19678
	I0920 18:12:28.238812  748509 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:12:28.240242  748509 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:12:28.241585  748509 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:12:28.242960  748509 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0920 18:12:28.245321  748509 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 18:12:28.245534  748509 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:12:28.277679  748509 out.go:97] Using the kvm2 driver based on user configuration
	I0920 18:12:28.277717  748509 start.go:297] selected driver: kvm2
	I0920 18:12:28.277725  748509 start.go:901] validating driver "kvm2" against <nil>
	I0920 18:12:28.278100  748509 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:12:28.278221  748509 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19678-739831/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:12:28.293107  748509 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:12:28.293164  748509 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:12:28.293737  748509 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0920 18:12:28.293883  748509 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 18:12:28.293922  748509 cni.go:84] Creating CNI manager for ""
	I0920 18:12:28.293980  748509 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:12:28.293988  748509 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 18:12:28.294038  748509 start.go:340] cluster config:
	{Name:download-only-675466 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-675466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:12:28.294203  748509 iso.go:125] acquiring lock: {Name:mk7c8e0c52ea50ffb7ac28fb9347d4f667085c62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:12:28.296063  748509 out.go:97] Downloading VM boot image ...
	I0920 18:12:28.296134  748509 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 18:12:31.091188  748509 out.go:97] Starting "download-only-675466" primary control-plane node in "download-only-675466" cluster
	I0920 18:12:31.091215  748509 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 18:12:31.120129  748509 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 18:12:31.120198  748509 cache.go:56] Caching tarball of preloaded images
	I0920 18:12:31.120387  748509 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 18:12:31.123096  748509 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0920 18:12:31.123114  748509 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0920 18:12:31.153909  748509 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-675466 host does not exist
	  To start a cluster, run: "minikube start -p download-only-675466"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-675466
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (6.42s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-363869 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-363869 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.422827084s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.42s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0920 18:12:44.187542  748497 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I0920 18:12:44.187592  748497 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-363869
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-363869: exit status 85 (59.331351ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-675466 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | -p download-only-675466        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
	| delete  | -p download-only-675466        | download-only-675466 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
	| start   | -o=json --download-only        | download-only-363869 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC |                     |
	|         | -p download-only-363869        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:12:37
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:12:37.805016  748715 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:12:37.805144  748715 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:12:37.805155  748715 out.go:358] Setting ErrFile to fd 2...
	I0920 18:12:37.805163  748715 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:12:37.805354  748715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
	I0920 18:12:37.805973  748715 out.go:352] Setting JSON to true
	I0920 18:12:37.807085  748715 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6908,"bootTime":1726849050,"procs":270,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:12:37.807189  748715 start.go:139] virtualization: kvm guest
	I0920 18:12:37.809306  748715 out.go:97] [download-only-363869] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:12:37.809466  748715 notify.go:220] Checking for updates...
	I0920 18:12:37.810931  748715 out.go:169] MINIKUBE_LOCATION=19678
	I0920 18:12:37.812546  748715 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:12:37.813784  748715 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:12:37.814983  748715 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:12:37.816151  748715 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0920 18:12:37.818272  748715 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 18:12:37.818485  748715 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:12:37.850626  748715 out.go:97] Using the kvm2 driver based on user configuration
	I0920 18:12:37.850663  748715 start.go:297] selected driver: kvm2
	I0920 18:12:37.850671  748715 start.go:901] validating driver "kvm2" against <nil>
	I0920 18:12:37.851148  748715 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:12:37.851269  748715 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19678-739831/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 18:12:37.866668  748715 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 18:12:37.866723  748715 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:12:37.867306  748715 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0920 18:12:37.867487  748715 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 18:12:37.867520  748715 cni.go:84] Creating CNI manager for ""
	I0920 18:12:37.867579  748715 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 18:12:37.867598  748715 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 18:12:37.867672  748715 start.go:340] cluster config:
	{Name:download-only-363869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-363869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:12:37.867788  748715 iso.go:125] acquiring lock: {Name:mk7c8e0c52ea50ffb7ac28fb9347d4f667085c62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:12:37.869354  748715 out.go:97] Starting "download-only-363869" primary control-plane node in "download-only-363869" cluster
	I0920 18:12:37.869372  748715 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:12:37.894960  748715 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:12:37.894997  748715 cache.go:56] Caching tarball of preloaded images
	I0920 18:12:37.895173  748715 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:12:37.897120  748715 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0920 18:12:37.897148  748715 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0920 18:12:37.933586  748715 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:12:42.788177  748715 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0920 18:12:42.788281  748715 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-363869 host does not exist
	  To start a cluster, run: "minikube start -p download-only-363869"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-363869
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
I0920 18:12:44.762374  748497 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-747965 --alsologtostderr --binary-mirror http://127.0.0.1:39359 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-747965" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-747965
--- PASS: TestBinaryMirror (0.60s)

                                                
                                    
TestOffline (81.72s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-666055 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-666055 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m20.564906964s)
helpers_test.go:175: Cleaning up "offline-crio-666055" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-666055
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-666055: (1.15669935s)
--- PASS: TestOffline (81.72s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-446299
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-446299: exit status 85 (53.417687ms)

                                                
                                                
-- stdout --
	* Profile "addons-446299" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-446299"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-446299
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-446299: exit status 85 (53.682744ms)

                                                
                                                
-- stdout --
	* Profile "addons-446299" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-446299"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (127.98s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-446299 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-446299 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m7.98220499s)
--- PASS: TestAddons/Setup (127.98s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (2.32s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-446299 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-446299 get secret gcp-auth -n new-namespace
addons_test.go:608: (dbg) Non-zero exit: kubectl --context addons-446299 get secret gcp-auth -n new-namespace: exit status 1 (90.142877ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:600: (dbg) Run:  kubectl --context addons-446299 logs -l app=gcp-auth -n gcp-auth
I0920 18:14:53.645367  748497 retry.go:31] will retry after 2.008836997s: %!w(<nil>): gcp-auth container logs: 
-- stdout --
	2024/09/20 18:14:52 GCP Auth Webhook started!
	2024/09/20 18:14:53 Ready to marshal response ...
	2024/09/20 18:14:53 Ready to write response ...

                                                
                                                
-- /stdout --
addons_test.go:608: (dbg) Run:  kubectl --context addons-446299 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (2.32s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.19s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-jb8hm" [096f4ed8-1f0c-4a0f-bfe7-ae402c930972] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00453989s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-446299
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-446299: (6.185801667s)
--- PASS: TestAddons/parallel/InspektorGadget (11.19s)

                                                
                                    
TestAddons/parallel/Headlamp (19.61s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-446299 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-9wkl6" [90eb0f87-3523-4102-abcc-3c8f04c9be35] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-9wkl6" [90eb0f87-3523-4102-abcc-3c8f04c9be35] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-9wkl6" [90eb0f87-3523-4102-abcc-3c8f04c9be35] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004920724s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p addons-446299 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p addons-446299 addons disable headlamp --alsologtostderr -v=1: (5.75752275s)
--- PASS: TestAddons/parallel/Headlamp (19.61s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.66s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-tmkzn" [e457fdeb-27fa-46a7-8824-d2433aa7f8b5] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004552661s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-446299
--- PASS: TestAddons/parallel/CloudSpanner (5.66s)

                                                
                                    
TestAddons/parallel/LocalPath (11.25s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-446299 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-446299 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-446299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-446299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-446299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-446299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-446299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-446299 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-446299 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b146a67e-bc53-41eb-96ab-86a1af30e01a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b146a67e-bc53-41eb-96ab-86a1af30e01a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b146a67e-bc53-41eb-96ab-86a1af30e01a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004495221s
addons_test.go:938: (dbg) Run:  kubectl --context addons-446299 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-amd64 -p addons-446299 ssh "cat /opt/local-path-provisioner/pvc-11168afa-d97c-4581-90a8-f19b354e2c35_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-446299 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-446299 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-amd64 -p addons-446299 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (11.25s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.71s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-6l2l2" [c6db8268-e330-413b-9107-88c63f861e42] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.024434301s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-446299
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.71s)

                                                
                                    
TestAddons/parallel/Yakd (11.97s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-5tzxd" [097f4d79-4b43-4f8a-a749-cf61e70a821a] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.02417798s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p addons-446299 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p addons-446299 addons disable yakd --alsologtostderr -v=1: (5.948286522s)
--- PASS: TestAddons/parallel/Yakd (11.97s)

                                                
                                    
TestAddons/StoppedEnableDisable (93.68s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-446299
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-446299: (1m33.411127958s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-446299
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-446299
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-446299
--- PASS: TestAddons/StoppedEnableDisable (93.68s)

                                                
                                    
TestCertOptions (75.39s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-059714 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0920 19:41:24.180091  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-059714 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m13.925208727s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-059714 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-059714 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-059714 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-059714" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-059714
--- PASS: TestCertOptions (75.39s)

                                                
                                    
TestCertExpiration (306.26s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-741208 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-741208 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (44.431479062s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-741208 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-741208 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m20.995224475s)
helpers_test.go:175: Cleaning up "cert-expiration-741208" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-741208
--- PASS: TestCertExpiration (306.26s)

                                                
                                    
TestForceSystemdFlag (65.27s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-569601 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0920 19:41:07.249039  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-569601 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m4.293767724s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-569601 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-569601" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-569601
--- PASS: TestForceSystemdFlag (65.27s)

                                                
                                    
TestForceSystemdEnv (96.65s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-701002 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-701002 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m35.628074263s)
helpers_test.go:175: Cleaning up "force-systemd-env-701002" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-701002
E0920 19:41:18.975916  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-701002: (1.018021174s)
--- PASS: TestForceSystemdEnv (96.65s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (3.38s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0920 19:42:19.484432  748497 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 19:42:19.484635  748497 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0920 19:42:19.521492  748497 install.go:62] docker-machine-driver-kvm2: exit status 1
W0920 19:42:19.522039  748497 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0920 19:42:19.522153  748497 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate829199509/001/docker-machine-driver-kvm2
I0920 19:42:19.837013  748497 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate829199509/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc0004ab240 gz:0xc0004ab248 tar:0xc0004ab170 tar.bz2:0xc0004ab180 tar.gz:0xc0004ab190 tar.xz:0xc0004ab1e0 tar.zst:0xc0004ab210 tbz2:0xc0004ab180 tgz:0xc0004ab190 txz:0xc0004ab1e0 tzst:0xc0004ab210 xz:0xc0004ab250 zip:0xc0004ab260 zst:0xc0004ab258] Getters:map[file:0xc001668da0 http:0xc00054aff0 https:0xc00054b040] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0920 19:42:19.837092  748497 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate829199509/001/docker-machine-driver-kvm2
I0920 19:42:21.248772  748497 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 19:42:21.248869  748497 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0920 19:42:21.281335  748497 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0920 19:42:21.281380  748497 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0920 19:42:21.281462  748497 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0920 19:42:21.281496  748497 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate829199509/002/docker-machine-driver-kvm2
I0920 19:42:21.447006  748497 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate829199509/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc0004ab240 gz:0xc0004ab248 tar:0xc0004ab170 tar.bz2:0xc0004ab180 tar.gz:0xc0004ab190 tar.xz:0xc0004ab1e0 tar.zst:0xc0004ab210 tbz2:0xc0004ab180 tgz:0xc0004ab190 txz:0xc0004ab1e0 tzst:0xc0004ab210 xz:0xc0004ab250 zip:0xc0004ab260 zst:0xc0004ab258] Getters:map[file:0xc001cfe290 http:0xc001cf0280 https:0xc001cf02d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0920 19:42:21.447062  748497 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate829199509/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (3.38s)
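
Note on the download fallback seen above: the checksum fetch for the arch-specific release asset returns 404, so the test falls back to the common (unsuffixed) asset and succeeds. A minimal Go sketch of that ordering, using hypothetical helper names (downloadDriver and fetchKVM2Driver are illustrations, not minikube's actual functions):

package main

import "fmt"

// downloadDriver is a stand-in for the real download step; here it is assumed
// to fail the way the log does when the checksum request gets a 404.
func downloadDriver(url string) error {
	return fmt.Errorf("downloading %s: invalid checksum: bad response code: 404", url)
}

// fetchKVM2Driver tries the arch-specific asset first and falls back to the
// common asset, mirroring the ordering shown in the log above.
func fetchKVM2Driver(version, arch string) error {
	base := "https://github.com/kubernetes/minikube/releases/download/" + version
	archURL := fmt.Sprintf("%s/docker-machine-driver-kvm2-%s", base, arch)
	if err := downloadDriver(archURL); err != nil {
		fmt.Printf("failed to download arch specific driver: %v. trying to get the common version\n", err)
		return downloadDriver(base + "/docker-machine-driver-kvm2")
	}
	return nil
}

func main() {
	_ = fetchKVM2Driver("v1.3.0", "amd64")
}
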

                                                
                                    
x
+
TestErrorSpam/setup (39.91s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-415565 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-415565 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-415565 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-415565 --driver=kvm2  --container-runtime=crio: (39.905976126s)
--- PASS: TestErrorSpam/setup (39.91s)

                                                
                                    
x
+
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-415565 --log_dir /tmp/nospam-415565 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-415565 --log_dir /tmp/nospam-415565 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-415565 --log_dir /tmp/nospam-415565 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
x
+
TestErrorSpam/status (0.73s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-415565 --log_dir /tmp/nospam-415565 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-415565 --log_dir /tmp/nospam-415565 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-415565 --log_dir /tmp/nospam-415565 status
--- PASS: TestErrorSpam/status (0.73s)

                                                
                                    
x
+
TestErrorSpam/pause (1.62s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-415565 --log_dir /tmp/nospam-415565 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-415565 --log_dir /tmp/nospam-415565 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-415565 --log_dir /tmp/nospam-415565 pause
--- PASS: TestErrorSpam/pause (1.62s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.76s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-415565 --log_dir /tmp/nospam-415565 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-415565 --log_dir /tmp/nospam-415565 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-415565 --log_dir /tmp/nospam-415565 unpause
--- PASS: TestErrorSpam/unpause (1.76s)

                                                
                                    
x
+
TestErrorSpam/stop (5.29s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-415565 --log_dir /tmp/nospam-415565 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-415565 --log_dir /tmp/nospam-415565 stop: (2.292746156s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-415565 --log_dir /tmp/nospam-415565 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-415565 --log_dir /tmp/nospam-415565 stop: (1.974829821s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-415565 --log_dir /tmp/nospam-415565 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-415565 --log_dir /tmp/nospam-415565 stop: (1.02359803s)
--- PASS: TestErrorSpam/stop (5.29s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19678-739831/.minikube/files/etc/test/nested/copy/748497/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (56.06s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-023857 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-023857 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (56.062184979s)
--- PASS: TestFunctional/serial/StartWithProxy (56.06s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (39.4s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0920 18:34:52.824370  748497 config.go:182] Loaded profile config "functional-023857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-023857 --alsologtostderr -v=8
E0920 18:34:55.906221  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:34:55.912567  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:34:55.923906  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:34:55.945251  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:34:55.986686  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:34:56.068138  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:34:56.229741  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:34:56.551476  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:34:57.193640  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:34:58.474969  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:35:01.036632  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:35:06.158151  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:35:16.400143  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-023857 --alsologtostderr -v=8: (39.396869113s)
functional_test.go:663: soft start took 39.397498876s for "functional-023857" cluster.
I0920 18:35:32.221536  748497 config.go:182] Loaded profile config "functional-023857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (39.40s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-023857 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-023857 cache add registry.k8s.io/pause:3.1: (1.080544573s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-023857 cache add registry.k8s.io/pause:3.3: (1.178911737s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-023857 cache add registry.k8s.io/pause:latest: (1.182956943s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.53s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-023857 /tmp/TestFunctionalserialCacheCmdcacheadd_local198235249/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 cache add minikube-local-cache-test:functional-023857
E0920 18:35:36.881488  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-023857 cache add minikube-local-cache-test:functional-023857: (1.208541172s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 cache delete minikube-local-cache-test:functional-023857
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-023857
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.53s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-023857 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (210.852922ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-023857 cache reload: (1.002760717s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)
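
The cache_reload steps above remove registry.k8s.io/pause:latest inside the node, confirm crictl no longer finds it, then restore it with `cache reload` and verify it is back. A small sketch of the same sequence driven from Go, assuming minikube is on PATH and reusing only the commands that appear in the log (the profile name functional-023857 is taken from this run):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and returns its combined output along with any
// exit error, so callers can treat an expected failure as a pass condition.
func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return string(out), err
}

func main() {
	profile := "functional-023857" // profile name from the log above
	image := "registry.k8s.io/pause:latest"

	// Remove the image on the node, then confirm it is gone
	// (the inspect is expected to fail at this point).
	run("minikube", "-p", profile, "ssh", "sudo crictl rmi "+image)
	if _, err := run("minikube", "-p", profile, "ssh", "sudo crictl inspecti "+image); err == nil {
		fmt.Println("image unexpectedly still present")
	}

	// Reload cached images and verify the image is present again.
	run("minikube", "-p", profile, "cache", "reload")
	if _, err := run("minikube", "-p", profile, "ssh", "sudo crictl inspecti "+image); err != nil {
		fmt.Println("image still missing after cache reload:", err)
	}
}
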

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 kubectl -- --context functional-023857 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-023857 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (37.32s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-023857 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-023857 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.319593525s)
functional_test.go:761: restart took 37.319697127s for "functional-023857" cluster.
I0920 18:36:16.900041  748497 config.go:182] Loaded profile config "functional-023857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (37.32s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-023857 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 logs
E0920 18:36:17.843121  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-023857 logs: (1.447325984s)
--- PASS: TestFunctional/serial/LogsCmd (1.45s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 logs --file /tmp/TestFunctionalserialLogsFileCmd2298658029/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-023857 logs --file /tmp/TestFunctionalserialLogsFileCmd2298658029/001/logs.txt: (1.445912735s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.45s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.11s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-023857 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-023857
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-023857: exit status 115 (273.474894ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.93:31670 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-023857 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.11s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-023857 config get cpus: exit status 14 (81.003597ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-023857 config get cpus: exit status 14 (46.165221ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (128.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-023857 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-023857 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 759858: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (128.08s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-023857 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-023857 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (134.955269ms)

                                                
                                                
-- stdout --
	* [functional-023857] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:36:36.041267  759375 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:36:36.041393  759375 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:36:36.041404  759375 out.go:358] Setting ErrFile to fd 2...
	I0920 18:36:36.041409  759375 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:36:36.041620  759375 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
	I0920 18:36:36.042223  759375 out.go:352] Setting JSON to false
	I0920 18:36:36.043298  759375 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8346,"bootTime":1726849050,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:36:36.043405  759375 start.go:139] virtualization: kvm guest
	I0920 18:36:36.045570  759375 out.go:177] * [functional-023857] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:36:36.046868  759375 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 18:36:36.046878  759375 notify.go:220] Checking for updates...
	I0920 18:36:36.048323  759375 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:36:36.049652  759375 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:36:36.050866  759375 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:36:36.052105  759375 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:36:36.053342  759375 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:36:36.054910  759375 config.go:182] Loaded profile config "functional-023857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:36:36.055308  759375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:36:36.055381  759375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:36:36.072091  759375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35643
	I0920 18:36:36.072455  759375 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:36:36.072988  759375 main.go:141] libmachine: Using API Version  1
	I0920 18:36:36.073013  759375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:36:36.073328  759375 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:36:36.073502  759375 main.go:141] libmachine: (functional-023857) Calling .DriverName
	I0920 18:36:36.073701  759375 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:36:36.073991  759375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:36:36.074052  759375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:36:36.088653  759375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39705
	I0920 18:36:36.089106  759375 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:36:36.089639  759375 main.go:141] libmachine: Using API Version  1
	I0920 18:36:36.089677  759375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:36:36.090001  759375 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:36:36.090224  759375 main.go:141] libmachine: (functional-023857) Calling .DriverName
	I0920 18:36:36.126210  759375 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 18:36:36.127548  759375 start.go:297] selected driver: kvm2
	I0920 18:36:36.127565  759375 start.go:901] validating driver "kvm2" against &{Name:functional-023857 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-023857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.93 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:36:36.127685  759375 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:36:36.129771  759375 out.go:201] 
	W0920 18:36:36.130897  759375 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0920 18:36:36.132107  759375 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-023857 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.27s)
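
The dry run above exits with RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250MB allocation is below minikube's usable minimum of 1800MB. A rough Go sketch of that validation, using assumed names rather than minikube's actual code:

package main

import "fmt"

// minUsableMemoryMB mirrors the 1800MB floor quoted in the error above;
// the constant name itself is an assumption made for this sketch.
const minUsableMemoryMB = 1800

// validateRequestedMemory rejects allocations below the usable minimum,
// the condition that produces the exit status 23 seen in the dry run.
func validateRequestedMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateRequestedMemory(250); err != nil {
		fmt.Println("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
	}
}
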

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-023857 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-023857 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (134.760017ms)

                                                
                                                
-- stdout --
	* [functional-023857] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:36:37.808076  759778 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:36:37.808339  759778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:36:37.808349  759778 out.go:358] Setting ErrFile to fd 2...
	I0920 18:36:37.808354  759778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:36:37.808664  759778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
	I0920 18:36:37.809227  759778 out.go:352] Setting JSON to false
	I0920 18:36:37.810275  759778 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8348,"bootTime":1726849050,"procs":250,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:36:37.810374  759778 start.go:139] virtualization: kvm guest
	I0920 18:36:37.812436  759778 out.go:177] * [functional-023857] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0920 18:36:37.813834  759778 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 18:36:37.813841  759778 notify.go:220] Checking for updates...
	I0920 18:36:37.816359  759778 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:36:37.817759  759778 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	I0920 18:36:37.819129  759778 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	I0920 18:36:37.820349  759778 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:36:37.821722  759778 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:36:37.823321  759778 config.go:182] Loaded profile config "functional-023857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:36:37.823743  759778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:36:37.823814  759778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:36:37.839986  759778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39211
	I0920 18:36:37.840459  759778 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:36:37.841068  759778 main.go:141] libmachine: Using API Version  1
	I0920 18:36:37.841109  759778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:36:37.841456  759778 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:36:37.841679  759778 main.go:141] libmachine: (functional-023857) Calling .DriverName
	I0920 18:36:37.841937  759778 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:36:37.842235  759778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 18:36:37.842275  759778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:36:37.857372  759778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38539
	I0920 18:36:37.857848  759778 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:36:37.858408  759778 main.go:141] libmachine: Using API Version  1
	I0920 18:36:37.858449  759778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:36:37.858816  759778 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:36:37.859004  759778 main.go:141] libmachine: (functional-023857) Calling .DriverName
	I0920 18:36:37.890577  759778 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0920 18:36:37.891878  759778 start.go:297] selected driver: kvm2
	I0920 18:36:37.891907  759778 start.go:901] validating driver "kvm2" against &{Name:functional-023857 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-023857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.93 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:36:37.892031  759778 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:36:37.894176  759778 out.go:201] 
	W0920 18:36:37.895327  759778 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0920 18:36:37.896536  759778 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.78s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (10.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-023857 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-023857 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-hj76w" [358194f5-7d44-4bf1-9c90-a57f0079a0a3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-hj76w" [358194f5-7d44-4bf1-9c90-a57f0079a0a3] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.016179494s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.93:31020
functional_test.go:1675: http://192.168.39.93:31020: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-hj76w

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.93:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.93:31020
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.57s)
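
The ServiceCmdConnect flow above creates a deployment from registry.k8s.io/echoserver:1.8, exposes it as a NodePort, waits for the pod to be Running, then resolves the service URL and checks the HTTP response body. A condensed Go sketch of the same flow, assuming kubectl and minikube are on PATH and the functional-023857 context from this run; the pod-readiness wait is omitted for brevity:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

// run executes a command and returns its trimmed combined output,
// ignoring the exit status for brevity.
func run(name string, args ...string) string {
	out, _ := exec.Command(name, args...).CombinedOutput()
	return strings.TrimSpace(string(out))
}

func main() {
	ctx := "functional-023857" // context/profile name from the log above

	// Create the deployment and expose it on a NodePort, as the test does.
	run("kubectl", "--context", ctx, "create", "deployment", "hello-node-connect",
		"--image=registry.k8s.io/echoserver:1.8")
	run("kubectl", "--context", ctx, "expose", "deployment", "hello-node-connect",
		"--type=NodePort", "--port=8080")

	// The real test waits for the app=hello-node-connect pod before this
	// point; that wait is skipped here.
	url := run("minikube", "-p", ctx, "service", "hello-node-connect", "--url")

	// Hit the NodePort URL; a 200 response whose body names the pod is
	// what the test treats as success.
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
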

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh -n functional-023857 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 cp functional-023857:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd834808758/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh -n functional-023857 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh -n functional-023857 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.39s)

                                                
                                    
TestFunctional/parallel/FileSync (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/748497/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh "sudo cat /etc/test/nested/copy/748497/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

                                                
                                    
TestFunctional/parallel/CertSync (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/748497.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh "sudo cat /etc/ssl/certs/748497.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/748497.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh "sudo cat /usr/share/ca-certificates/748497.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/7484972.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh "sudo cat /etc/ssl/certs/7484972.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/7484972.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh "sudo cat /usr/share/ca-certificates/7484972.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.14s)
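
Note: the "<hash>.0" paths checked above follow openssl's subject-hash naming convention for CA certificate directories. A minimal illustrative command (not part of the test; the pem path is just one of the files synced above) showing how such a name is derived:

	# prints the 8-hex-digit subject hash that the ".0" symlink name is built from
	$ openssl x509 -noout -subject_hash -in /etc/ssl/certs/748497.pem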

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-023857 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
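
The go-template above prints only the label keys of the first node. For comparison, an illustrative alternative (not what the test runs) is kubectl's built-in flag:

	# list nodes with their labels directly
	$ kubectl --context functional-023857 get nodes --show-labels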

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-023857 ssh "sudo systemctl is-active docker": exit status 1 (252.565509ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-023857 ssh "sudo systemctl is-active containerd": exit status 1 (249.754738ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.50s)
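
The non-zero exits above are the expected outcome: with crio configured as the runtime for this profile, docker and containerd should report as inactive. A minimal sketch of the check performed over SSH, assuming a systemd host where crio is the only active container runtime; exit status 3 is systemd's code for an inactive unit, which matches the "Process exited with status 3" lines above:

	$ systemctl is-active crio; echo "exit=$?"
	active
	exit=0
	$ systemctl is-active docker; echo "exit=$?"
	inactive
	exit=3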

                                                
                                    
TestFunctional/parallel/License (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.15s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-023857 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-023857 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-7rbf2" [d7c14f12-78fa-492c-a893-12ea14bbaa08] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-7rbf2" [d7c14f12-78fa-492c-a893-12ea14bbaa08] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.003755156s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.21s)
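
Illustrative only (not part of the test): once the hello-node deployment is exposed as a NodePort service, it could be reached manually along these lines. NODE_IP and NODE_PORT are throwaway names for this sketch; the service and context names are taken from the kubectl commands above.

	$ NODE_IP=$(out/minikube-linux-amd64 -p functional-023857 ip)
	$ NODE_PORT=$(kubectl --context functional-023857 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}')
	$ curl http://$NODE_IP:$NODE_PORT/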

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "305.215944ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "54.471321ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "297.033105ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "48.087184ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.93s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-023857 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-023857
localhost/kicbase/echo-server:functional-023857
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-023857 image ls --format short --alsologtostderr:
I0920 18:37:42.318498  760556 out.go:345] Setting OutFile to fd 1 ...
I0920 18:37:42.318811  760556 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:37:42.318823  760556 out.go:358] Setting ErrFile to fd 2...
I0920 18:37:42.318828  760556 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:37:42.319114  760556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
I0920 18:37:42.319853  760556 config.go:182] Loaded profile config "functional-023857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 18:37:42.319984  760556 config.go:182] Loaded profile config "functional-023857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 18:37:42.320408  760556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:37:42.320463  760556 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:37:42.336140  760556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34469
I0920 18:37:42.336760  760556 main.go:141] libmachine: () Calling .GetVersion
I0920 18:37:42.337422  760556 main.go:141] libmachine: Using API Version  1
I0920 18:37:42.337442  760556 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:37:42.337819  760556 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:37:42.338037  760556 main.go:141] libmachine: (functional-023857) Calling .GetState
I0920 18:37:42.340011  760556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:37:42.340066  760556 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:37:42.355093  760556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40663
I0920 18:37:42.355594  760556 main.go:141] libmachine: () Calling .GetVersion
I0920 18:37:42.356040  760556 main.go:141] libmachine: Using API Version  1
I0920 18:37:42.356059  760556 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:37:42.356411  760556 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:37:42.356656  760556 main.go:141] libmachine: (functional-023857) Calling .DriverName
I0920 18:37:42.356873  760556 ssh_runner.go:195] Run: systemctl --version
I0920 18:37:42.356914  760556 main.go:141] libmachine: (functional-023857) Calling .GetSSHHostname
I0920 18:37:42.359786  760556 main.go:141] libmachine: (functional-023857) DBG | domain functional-023857 has defined MAC address 52:54:00:48:f3:a4 in network mk-functional-023857
I0920 18:37:42.360268  760556 main.go:141] libmachine: (functional-023857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:f3:a4", ip: ""} in network mk-functional-023857: {Iface:virbr1 ExpiryTime:2024-09-20 19:34:11 +0000 UTC Type:0 Mac:52:54:00:48:f3:a4 Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:functional-023857 Clientid:01:52:54:00:48:f3:a4}
I0920 18:37:42.360297  760556 main.go:141] libmachine: (functional-023857) DBG | domain functional-023857 has defined IP address 192.168.39.93 and MAC address 52:54:00:48:f3:a4 in network mk-functional-023857
I0920 18:37:42.360596  760556 main.go:141] libmachine: (functional-023857) Calling .GetSSHPort
I0920 18:37:42.360788  760556 main.go:141] libmachine: (functional-023857) Calling .GetSSHKeyPath
I0920 18:37:42.360938  760556 main.go:141] libmachine: (functional-023857) Calling .GetSSHUsername
I0920 18:37:42.361081  760556 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/functional-023857/id_rsa Username:docker}
I0920 18:37:42.454393  760556 ssh_runner.go:195] Run: sudo crictl images --output json
I0920 18:37:42.498786  760556 main.go:141] libmachine: Making call to close driver server
I0920 18:37:42.498804  760556 main.go:141] libmachine: (functional-023857) Calling .Close
I0920 18:37:42.499123  760556 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:37:42.499143  760556 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:37:42.499152  760556 main.go:141] libmachine: Making call to close driver server
I0920 18:37:42.499161  760556 main.go:141] libmachine: (functional-023857) Calling .Close
I0920 18:37:42.499374  760556 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:37:42.499388  760556 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-023857 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| localhost/kicbase/echo-server           | functional-023857  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| localhost/minikube-local-cache-test     | functional-023857  | 9d62fb621d4b1 | 3.33kB |
| localhost/my-image                      | functional-023857  | b5b95f6beced4 | 1.47MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-023857 image ls --format table --alsologtostderr:
I0920 18:37:45.216027  760727 out.go:345] Setting OutFile to fd 1 ...
I0920 18:37:45.216163  760727 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:37:45.216173  760727 out.go:358] Setting ErrFile to fd 2...
I0920 18:37:45.216178  760727 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:37:45.216391  760727 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
I0920 18:37:45.217030  760727 config.go:182] Loaded profile config "functional-023857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 18:37:45.217144  760727 config.go:182] Loaded profile config "functional-023857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 18:37:45.217594  760727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:37:45.217645  760727 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:37:45.233278  760727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45843
I0920 18:37:45.233903  760727 main.go:141] libmachine: () Calling .GetVersion
I0920 18:37:45.234580  760727 main.go:141] libmachine: Using API Version  1
I0920 18:37:45.234608  760727 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:37:45.234993  760727 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:37:45.235190  760727 main.go:141] libmachine: (functional-023857) Calling .GetState
I0920 18:37:45.237092  760727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:37:45.237138  760727 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:37:45.252150  760727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38219
I0920 18:37:45.252604  760727 main.go:141] libmachine: () Calling .GetVersion
I0920 18:37:45.253115  760727 main.go:141] libmachine: Using API Version  1
I0920 18:37:45.253139  760727 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:37:45.253442  760727 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:37:45.253638  760727 main.go:141] libmachine: (functional-023857) Calling .DriverName
I0920 18:37:45.253876  760727 ssh_runner.go:195] Run: systemctl --version
I0920 18:37:45.253903  760727 main.go:141] libmachine: (functional-023857) Calling .GetSSHHostname
I0920 18:37:45.256632  760727 main.go:141] libmachine: (functional-023857) DBG | domain functional-023857 has defined MAC address 52:54:00:48:f3:a4 in network mk-functional-023857
I0920 18:37:45.257110  760727 main.go:141] libmachine: (functional-023857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:f3:a4", ip: ""} in network mk-functional-023857: {Iface:virbr1 ExpiryTime:2024-09-20 19:34:11 +0000 UTC Type:0 Mac:52:54:00:48:f3:a4 Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:functional-023857 Clientid:01:52:54:00:48:f3:a4}
I0920 18:37:45.257147  760727 main.go:141] libmachine: (functional-023857) DBG | domain functional-023857 has defined IP address 192.168.39.93 and MAC address 52:54:00:48:f3:a4 in network mk-functional-023857
I0920 18:37:45.257207  760727 main.go:141] libmachine: (functional-023857) Calling .GetSSHPort
I0920 18:37:45.257383  760727 main.go:141] libmachine: (functional-023857) Calling .GetSSHKeyPath
I0920 18:37:45.257494  760727 main.go:141] libmachine: (functional-023857) Calling .GetSSHUsername
I0920 18:37:45.257610  760727 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/functional-023857/id_rsa Username:docker}
I0920 18:37:45.338065  760727 ssh_runner.go:195] Run: sudo crictl images --output json
I0920 18:37:45.375706  760727 main.go:141] libmachine: Making call to close driver server
I0920 18:37:45.375736  760727 main.go:141] libmachine: (functional-023857) Calling .Close
I0920 18:37:45.376027  760727 main.go:141] libmachine: (functional-023857) DBG | Closing plugin on server side
I0920 18:37:45.376036  760727 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:37:45.376062  760727 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:37:45.376071  760727 main.go:141] libmachine: Making call to close driver server
I0920 18:37:45.376079  760727 main.go:141] libmachine: (functional-023857) Calling .Close
I0920 18:37:45.376323  760727 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:37:45.376339  760727 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-023857 image ls --format json --alsologtostderr:
[{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2
e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"c5026825fe61e77b891429829fc3903dfd77835c0b9985f03b21fdadd89d9ab9","repoDigests":["docker.io/library/e68468a618f188296db4bbb9da32eeca8f9628831874beb6384b6981dac7b247-tmp@sha256:590391796833f901c1051cf9ef31b15762ce7e4270fe11a4a1be8f2a1b2afe22"],"repoTags":[],"size":"1466018"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641
cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"b5b95f6beced4bd381249e44ba016648b84b8d6443433dd2e455214fa7cd34f2","repoDigests":["localhost/my-image@sha256:a64d45ce2adea0bf2a3e8d5f075a4a92a85210c1904e78cfc9e2bf8ed5525abd"],"repoTags":["localhost/my-image:functional-023857"],"size":
"1468600"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092c
e206e98765c"],"repoTags":[],"size":"43824855"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-023857"],"size":"4943877"},{"id":"9d62fb621d4b1b6409c2def9d2dc07eb065c4b46f867a9156aece0a75096a683","repoDigests":["localhost/minikube-local-cache-test@sha256:100d10a222c0155c191448d2f21ed31209b77a33d1e6d0308b38d78fe7cdc3d3"],"repoTags":["localhost/minikube-local-cache-test:functional-023857"],"size":"3330"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"60c005f310ff3ad6d131805170f07d29460953
07063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k
8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-023857 image ls --format json --alsologtostderr:
I0920 18:37:45.002298  760703 out.go:345] Setting OutFile to fd 1 ...
I0920 18:37:45.002563  760703 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:37:45.002578  760703 out.go:358] Setting ErrFile to fd 2...
I0920 18:37:45.002583  760703 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:37:45.002776  760703 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
I0920 18:37:45.003423  760703 config.go:182] Loaded profile config "functional-023857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 18:37:45.003521  760703 config.go:182] Loaded profile config "functional-023857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 18:37:45.003917  760703 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:37:45.003969  760703 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:37:45.019570  760703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44949
I0920 18:37:45.020086  760703 main.go:141] libmachine: () Calling .GetVersion
I0920 18:37:45.020688  760703 main.go:141] libmachine: Using API Version  1
I0920 18:37:45.020714  760703 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:37:45.021075  760703 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:37:45.021269  760703 main.go:141] libmachine: (functional-023857) Calling .GetState
I0920 18:37:45.023154  760703 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:37:45.023197  760703 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:37:45.038196  760703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33783
I0920 18:37:45.038689  760703 main.go:141] libmachine: () Calling .GetVersion
I0920 18:37:45.039190  760703 main.go:141] libmachine: Using API Version  1
I0920 18:37:45.039207  760703 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:37:45.039518  760703 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:37:45.039719  760703 main.go:141] libmachine: (functional-023857) Calling .DriverName
I0920 18:37:45.040002  760703 ssh_runner.go:195] Run: systemctl --version
I0920 18:37:45.040033  760703 main.go:141] libmachine: (functional-023857) Calling .GetSSHHostname
I0920 18:37:45.042881  760703 main.go:141] libmachine: (functional-023857) DBG | domain functional-023857 has defined MAC address 52:54:00:48:f3:a4 in network mk-functional-023857
I0920 18:37:45.043275  760703 main.go:141] libmachine: (functional-023857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:f3:a4", ip: ""} in network mk-functional-023857: {Iface:virbr1 ExpiryTime:2024-09-20 19:34:11 +0000 UTC Type:0 Mac:52:54:00:48:f3:a4 Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:functional-023857 Clientid:01:52:54:00:48:f3:a4}
I0920 18:37:45.043296  760703 main.go:141] libmachine: (functional-023857) DBG | domain functional-023857 has defined IP address 192.168.39.93 and MAC address 52:54:00:48:f3:a4 in network mk-functional-023857
I0920 18:37:45.043447  760703 main.go:141] libmachine: (functional-023857) Calling .GetSSHPort
I0920 18:37:45.043607  760703 main.go:141] libmachine: (functional-023857) Calling .GetSSHKeyPath
I0920 18:37:45.043781  760703 main.go:141] libmachine: (functional-023857) Calling .GetSSHUsername
I0920 18:37:45.043930  760703 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/functional-023857/id_rsa Username:docker}
I0920 18:37:45.121856  760703 ssh_runner.go:195] Run: sudo crictl images --output json
I0920 18:37:45.166614  760703 main.go:141] libmachine: Making call to close driver server
I0920 18:37:45.166638  760703 main.go:141] libmachine: (functional-023857) Calling .Close
I0920 18:37:45.167027  760703 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:37:45.167052  760703 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:37:45.167081  760703 main.go:141] libmachine: Making call to close driver server
I0920 18:37:45.167101  760703 main.go:141] libmachine: (functional-023857) Calling .Close
I0920 18:37:45.167102  760703 main.go:141] libmachine: (functional-023857) DBG | Closing plugin on server side
I0920 18:37:45.167396  760703 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:37:45.167425  760703 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-023857 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-023857
size: "4943877"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: 9d62fb621d4b1b6409c2def9d2dc07eb065c4b46f867a9156aece0a75096a683
repoDigests:
- localhost/minikube-local-cache-test@sha256:100d10a222c0155c191448d2f21ed31209b77a33d1e6d0308b38d78fe7cdc3d3
repoTags:
- localhost/minikube-local-cache-test:functional-023857
size: "3330"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-023857 image ls --format yaml --alsologtostderr:
I0920 18:37:42.549677  760579 out.go:345] Setting OutFile to fd 1 ...
I0920 18:37:42.549950  760579 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:37:42.549960  760579 out.go:358] Setting ErrFile to fd 2...
I0920 18:37:42.549965  760579 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:37:42.550158  760579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
I0920 18:37:42.550840  760579 config.go:182] Loaded profile config "functional-023857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 18:37:42.550996  760579 config.go:182] Loaded profile config "functional-023857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 18:37:42.551428  760579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:37:42.551478  760579 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:37:42.567513  760579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34161
I0920 18:37:42.568097  760579 main.go:141] libmachine: () Calling .GetVersion
I0920 18:37:42.568762  760579 main.go:141] libmachine: Using API Version  1
I0920 18:37:42.568789  760579 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:37:42.569175  760579 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:37:42.569371  760579 main.go:141] libmachine: (functional-023857) Calling .GetState
I0920 18:37:42.571365  760579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:37:42.571420  760579 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:37:42.586768  760579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44813
I0920 18:37:42.587349  760579 main.go:141] libmachine: () Calling .GetVersion
I0920 18:37:42.587946  760579 main.go:141] libmachine: Using API Version  1
I0920 18:37:42.587975  760579 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:37:42.588386  760579 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:37:42.588588  760579 main.go:141] libmachine: (functional-023857) Calling .DriverName
I0920 18:37:42.588821  760579 ssh_runner.go:195] Run: systemctl --version
I0920 18:37:42.588854  760579 main.go:141] libmachine: (functional-023857) Calling .GetSSHHostname
I0920 18:37:42.592165  760579 main.go:141] libmachine: (functional-023857) DBG | domain functional-023857 has defined MAC address 52:54:00:48:f3:a4 in network mk-functional-023857
I0920 18:37:42.592653  760579 main.go:141] libmachine: (functional-023857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:f3:a4", ip: ""} in network mk-functional-023857: {Iface:virbr1 ExpiryTime:2024-09-20 19:34:11 +0000 UTC Type:0 Mac:52:54:00:48:f3:a4 Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:functional-023857 Clientid:01:52:54:00:48:f3:a4}
I0920 18:37:42.592687  760579 main.go:141] libmachine: (functional-023857) DBG | domain functional-023857 has defined IP address 192.168.39.93 and MAC address 52:54:00:48:f3:a4 in network mk-functional-023857
I0920 18:37:42.592732  760579 main.go:141] libmachine: (functional-023857) Calling .GetSSHPort
I0920 18:37:42.592890  760579 main.go:141] libmachine: (functional-023857) Calling .GetSSHKeyPath
I0920 18:37:42.593035  760579 main.go:141] libmachine: (functional-023857) Calling .GetSSHUsername
I0920 18:37:42.593175  760579 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/functional-023857/id_rsa Username:docker}
I0920 18:37:42.682308  760579 ssh_runner.go:195] Run: sudo crictl images --output json
I0920 18:37:42.730276  760579 main.go:141] libmachine: Making call to close driver server
I0920 18:37:42.730295  760579 main.go:141] libmachine: (functional-023857) Calling .Close
I0920 18:37:42.730600  760579 main.go:141] libmachine: (functional-023857) DBG | Closing plugin on server side
I0920 18:37:42.730611  760579 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:37:42.730624  760579 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:37:42.730631  760579 main.go:141] libmachine: Making call to close driver server
I0920 18:37:42.730637  760579 main.go:141] libmachine: (functional-023857) Calling .Close
I0920 18:37:42.730852  760579 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:37:42.730867  760579 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-023857 ssh pgrep buildkitd: exit status 1 (193.885893ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 image build -t localhost/my-image:functional-023857 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-023857 image build -t localhost/my-image:functional-023857 testdata/build --alsologtostderr: (1.817693964s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-023857 image build -t localhost/my-image:functional-023857 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> c5026825fe6
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-023857
--> b5b95f6bece
Successfully tagged localhost/my-image:functional-023857
b5b95f6beced4bd381249e44ba016648b84b8d6443433dd2e455214fa7cd34f2
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-023857 image build -t localhost/my-image:functional-023857 testdata/build --alsologtostderr:
I0920 18:37:42.973147  760639 out.go:345] Setting OutFile to fd 1 ...
I0920 18:37:42.973395  760639 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:37:42.973404  760639 out.go:358] Setting ErrFile to fd 2...
I0920 18:37:42.973409  760639 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:37:42.973602  760639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
I0920 18:37:42.974208  760639 config.go:182] Loaded profile config "functional-023857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 18:37:42.974785  760639 config.go:182] Loaded profile config "functional-023857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 18:37:42.975267  760639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:37:42.975309  760639 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:37:42.990447  760639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42979
I0920 18:37:42.990902  760639 main.go:141] libmachine: () Calling .GetVersion
I0920 18:37:42.991509  760639 main.go:141] libmachine: Using API Version  1
I0920 18:37:42.991534  760639 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:37:42.991894  760639 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:37:42.992110  760639 main.go:141] libmachine: (functional-023857) Calling .GetState
I0920 18:37:42.993722  760639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:37:42.993757  760639 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:37:43.009070  760639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40785
I0920 18:37:43.009584  760639 main.go:141] libmachine: () Calling .GetVersion
I0920 18:37:43.010085  760639 main.go:141] libmachine: Using API Version  1
I0920 18:37:43.010114  760639 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:37:43.010472  760639 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:37:43.010640  760639 main.go:141] libmachine: (functional-023857) Calling .DriverName
I0920 18:37:43.010886  760639 ssh_runner.go:195] Run: systemctl --version
I0920 18:37:43.010924  760639 main.go:141] libmachine: (functional-023857) Calling .GetSSHHostname
I0920 18:37:43.013417  760639 main.go:141] libmachine: (functional-023857) DBG | domain functional-023857 has defined MAC address 52:54:00:48:f3:a4 in network mk-functional-023857
I0920 18:37:43.013811  760639 main.go:141] libmachine: (functional-023857) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:f3:a4", ip: ""} in network mk-functional-023857: {Iface:virbr1 ExpiryTime:2024-09-20 19:34:11 +0000 UTC Type:0 Mac:52:54:00:48:f3:a4 Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:functional-023857 Clientid:01:52:54:00:48:f3:a4}
I0920 18:37:43.013847  760639 main.go:141] libmachine: (functional-023857) DBG | domain functional-023857 has defined IP address 192.168.39.93 and MAC address 52:54:00:48:f3:a4 in network mk-functional-023857
I0920 18:37:43.013974  760639 main.go:141] libmachine: (functional-023857) Calling .GetSSHPort
I0920 18:37:43.014159  760639 main.go:141] libmachine: (functional-023857) Calling .GetSSHKeyPath
I0920 18:37:43.014297  760639 main.go:141] libmachine: (functional-023857) Calling .GetSSHUsername
I0920 18:37:43.014405  760639 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/functional-023857/id_rsa Username:docker}
I0920 18:37:43.093526  760639 build_images.go:161] Building image from path: /tmp/build.2432909881.tar
I0920 18:37:43.093597  760639 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0920 18:37:43.105464  760639 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2432909881.tar
I0920 18:37:43.110214  760639 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2432909881.tar: stat -c "%s %y" /var/lib/minikube/build/build.2432909881.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2432909881.tar': No such file or directory
I0920 18:37:43.110254  760639 ssh_runner.go:362] scp /tmp/build.2432909881.tar --> /var/lib/minikube/build/build.2432909881.tar (3072 bytes)
I0920 18:37:43.135543  760639 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2432909881
I0920 18:37:43.145306  760639 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2432909881 -xf /var/lib/minikube/build/build.2432909881.tar
I0920 18:37:43.155127  760639 crio.go:315] Building image: /var/lib/minikube/build/build.2432909881
I0920 18:37:43.155190  760639 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-023857 /var/lib/minikube/build/build.2432909881 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0920 18:37:44.718340  760639 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-023857 /var/lib/minikube/build/build.2432909881 --cgroup-manager=cgroupfs: (1.563121472s)
I0920 18:37:44.718430  760639 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2432909881
I0920 18:37:44.731816  760639 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2432909881.tar
I0920 18:37:44.742708  760639 build_images.go:217] Built localhost/my-image:functional-023857 from /tmp/build.2432909881.tar
I0920 18:37:44.742749  760639 build_images.go:133] succeeded building to: functional-023857
I0920 18:37:44.742755  760639 build_images.go:134] failed building to: 
I0920 18:37:44.742792  760639 main.go:141] libmachine: Making call to close driver server
I0920 18:37:44.742807  760639 main.go:141] libmachine: (functional-023857) Calling .Close
I0920 18:37:44.743116  760639 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:37:44.743138  760639 main.go:141] libmachine: (functional-023857) DBG | Closing plugin on server side
I0920 18:37:44.743152  760639 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:37:44.743166  760639 main.go:141] libmachine: Making call to close driver server
I0920 18:37:44.743172  760639 main.go:141] libmachine: (functional-023857) Calling .Close
I0920 18:37:44.743411  760639 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:37:44.743426  760639 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:37:44.743492  760639 main.go:141] libmachine: (functional-023857) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.22s)
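The image-build flow above (pack the context into a tar, scp it to the guest, run podman build under /var/lib/minikube/build) is what `minikube image build` does for you. A minimal sketch against the same profile; the ./testdata/build context directory is an assumption, since the log only shows the temporary tar the test ships across:

    # build inside the functional-023857 VM and tag the result for CRI-O
    out/minikube-linux-amd64 -p functional-023857 image build -t localhost/my-image:functional-023857 ./testdata/build
    # confirm the image is now visible to the cluster's runtime
    out/minikube-linux-amd64 -p functional-023857 image ls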

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-023857
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 image load --daemon kicbase/echo-server:functional-023857 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-023857 image load --daemon kicbase/echo-server:functional-023857 --alsologtostderr: (1.397200038s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.61s)
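Setup plus ImageLoadDaemon boil down to pulling and retagging locally, then copying the image from the Docker daemon into the cluster's runtime; a condensed sketch of the same commands:

    docker pull kicbase/echo-server:1.0
    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-023857
    # copy from the local Docker daemon into CRI-O inside the VM
    out/minikube-linux-amd64 -p functional-023857 image load --daemon kicbase/echo-server:functional-023857
    out/minikube-linux-amd64 -p functional-023857 image ls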

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 image load --daemon kicbase/echo-server:functional-023857 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-023857
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 image load --daemon kicbase/echo-server:functional-023857 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 image save kicbase/echo-server:functional-023857 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 image rm kicbase/echo-server:functional-023857 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.93s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-023857
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 image save --daemon kicbase/echo-server:functional-023857 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-023857
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)
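The four tests above (ImageSaveToFile, ImageRemove, ImageLoadFromFile, ImageSaveDaemon) walk the image through a full round trip; a condensed sketch, with /tmp/echo-server-save.tar standing in for the Jenkins workspace path used in the run:

    # export from the cluster to a tarball on the host
    out/minikube-linux-amd64 -p functional-023857 image save kicbase/echo-server:functional-023857 /tmp/echo-server-save.tar
    # drop it from the cluster, then re-import it from the tarball
    out/minikube-linux-amd64 -p functional-023857 image rm kicbase/echo-server:functional-023857
    out/minikube-linux-amd64 -p functional-023857 image load /tmp/echo-server-save.tar
    # or push it straight back into the local Docker daemon and inspect it there
    out/minikube-linux-amd64 -p functional-023857 image save --daemon kicbase/echo-server:functional-023857
    docker image inspect localhost/kicbase/echo-server:functional-023857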

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 update-context --alsologtostderr -v=2
2024/09/20 18:38:45 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)
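update-context rewrites the profile's kubeconfig entry so kubectl targets the current API-server address. A quick follow-up check, assuming kubectl is on PATH (the kubectl step is not part of the test):

    out/minikube-linux-amd64 -p functional-023857 update-context --alsologtostderr -v=2
    kubectl config current-context    # show which context is active after the update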

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.45s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 service list -o json
functional_test.go:1494: Took "483.061306ms" to run "out/minikube-linux-amd64 -p functional-023857 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.48s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.93:31091
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)
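The service subcommands above are directly replayable against the same hello-node service; a short sketch:

    out/minikube-linux-amd64 -p functional-023857 service list
    out/minikube-linux-amd64 -p functional-023857 service list -o json
    # print the HTTPS endpoint without opening a browser (https://192.168.39.93:31091 in this run)
    out/minikube-linux-amd64 -p functional-023857 service --namespace=default --https --url hello-node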

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (61.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-023857 /tmp/TestFunctionalparallelMountCmdany-port3047830962/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726857396275011806" to /tmp/TestFunctionalparallelMountCmdany-port3047830962/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726857396275011806" to /tmp/TestFunctionalparallelMountCmdany-port3047830962/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726857396275011806" to /tmp/TestFunctionalparallelMountCmdany-port3047830962/001/test-1726857396275011806
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-023857 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (207.260235ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0920 18:36:36.482600  748497 retry.go:31] will retry after 405.522286ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 20 18:36 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 20 18:36 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 20 18:36 test-1726857396275011806
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh cat /mount-9p/test-1726857396275011806
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-023857 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [f272a016-0500-4c42-a245-a79b4aa77359] Pending
helpers_test.go:344: "busybox-mount" [f272a016-0500-4c42-a245-a79b4aa77359] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [f272a016-0500-4c42-a245-a79b4aa77359] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [f272a016-0500-4c42-a245-a79b4aa77359] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 59.003607647s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-023857 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-023857 /tmp/TestFunctionalparallelMountCmdany-port3047830962/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (61.37s)
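The any-port test drives a plain 9p mount; a sketch of reproducing it by hand, where ~/scratch is an assumed host directory:

    # run the mount helper in the background, then verify from inside the guest
    out/minikube-linux-amd64 mount -p functional-023857 ~/scratch:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-amd64 -p functional-023857 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-023857 ssh -- ls -la /mount-9p
    # tear the mount down when done
    out/minikube-linux-amd64 -p functional-023857 ssh "sudo umount -f /mount-9p"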

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.93:31091
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.28s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-023857 /tmp/TestFunctionalparallelMountCmdspecific-port2890524020/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-023857 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (190.277556ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0920 18:37:37.836093  748497 retry.go:31] will retry after 570.614564ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-023857 /tmp/TestFunctionalparallelMountCmdspecific-port2890524020/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-023857 ssh "sudo umount -f /mount-9p": exit status 1 (246.734178ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-023857 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-023857 /tmp/TestFunctionalparallelMountCmdspecific-port2890524020/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.85s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-023857 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2424216336/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-023857 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2424216336/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-023857 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2424216336/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-023857 ssh "findmnt -T" /mount1: exit status 1 (265.904481ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0920 18:37:39.764176  748497 retry.go:31] will retry after 738.764874ms: exit status 1
E0920 18:37:39.765233  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-023857 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-023857 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-023857 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2424216336/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-023857 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2424216336/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-023857 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2424216336/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.80s)
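VerifyCleanup leaves three mount helpers running on purpose and then relies on the --kill escape hatch, which is used here to terminate the lingering mount processes for the profile:

    out/minikube-linux-amd64 mount -p functional-023857 --kill=true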

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-023857
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-023857
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-023857
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (191.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-525790 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-525790 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m10.717113075s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (191.37s)
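The HA cluster that the rest of TestMultiControlPlane operates on comes up with the exact invocation above; restated as a standalone sketch:

    out/minikube-linux-amd64 start -p ha-525790 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p ha-525790 status -v=7 --alsologtostderr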

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-525790 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-525790 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-525790 -- rollout status deployment/busybox: (2.845766206s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-525790 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-525790 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-525790 -- exec busybox-7dff88458-7jtss -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-525790 -- exec busybox-7dff88458-jmx4g -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-525790 -- exec busybox-7dff88458-z26jr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-525790 -- exec busybox-7dff88458-7jtss -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-525790 -- exec busybox-7dff88458-jmx4g -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-525790 -- exec busybox-7dff88458-z26jr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-525790 -- exec busybox-7dff88458-7jtss -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-525790 -- exec busybox-7dff88458-jmx4g -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-525790 -- exec busybox-7dff88458-z26jr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.05s)
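DeployApp rolls out a busybox Deployment and resolves cluster DNS from each replica; a condensed sketch of the same kubectl calls, with <pod-name> standing in for the generated replica names:

    out/minikube-linux-amd64 kubectl -p ha-525790 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    out/minikube-linux-amd64 kubectl -p ha-525790 -- rollout status deployment/busybox
    out/minikube-linux-amd64 kubectl -p ha-525790 -- get pods -o jsonpath='{.items[*].metadata.name}'
    out/minikube-linux-amd64 kubectl -p ha-525790 -- exec <pod-name> -- nslookup kubernetes.default.svc.cluster.local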

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-525790 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-525790 -- exec busybox-7dff88458-7jtss -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-525790 -- exec busybox-7dff88458-7jtss -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-525790 -- exec busybox-7dff88458-jmx4g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-525790 -- exec busybox-7dff88458-jmx4g -- sh -c "ping -c 1 192.168.39.1"
E0920 18:49:55.905411  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-525790 -- exec busybox-7dff88458-z26jr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-525790 -- exec busybox-7dff88458-z26jr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.20s)
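Host reachability is checked by resolving host.minikube.internal inside a pod and pinging the address that comes back (the KVM gateway 192.168.39.1 in this run); the same pipeline with a placeholder pod name:

    out/minikube-linux-amd64 kubectl -p ha-525790 -- exec <pod-name> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-amd64 kubectl -p ha-525790 -- exec <pod-name> -- sh -c "ping -c 1 192.168.39.1"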

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (55.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-525790 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-525790 -v=7 --alsologtostderr: (54.646035321s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.46s)
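Nodes are added to the running profile with `node add`; the worker form is shown here, and the --control-plane variant is exercised later in AddSecondaryNode:

    out/minikube-linux-amd64 node add -p ha-525790 -v=7 --alsologtostderr
    out/minikube-linux-amd64 node add -p ha-525790 --control-plane -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-525790 status -v=7 --alsologtostderr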

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-525790 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 cp testdata/cp-test.txt ha-525790:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 cp ha-525790:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3362703692/001/cp-test_ha-525790.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 cp ha-525790:/home/docker/cp-test.txt ha-525790-m02:/home/docker/cp-test_ha-525790_ha-525790-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790-m02 "sudo cat /home/docker/cp-test_ha-525790_ha-525790-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 cp ha-525790:/home/docker/cp-test.txt ha-525790-m03:/home/docker/cp-test_ha-525790_ha-525790-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790-m03 "sudo cat /home/docker/cp-test_ha-525790_ha-525790-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 cp ha-525790:/home/docker/cp-test.txt ha-525790-m04:/home/docker/cp-test_ha-525790_ha-525790-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790-m04 "sudo cat /home/docker/cp-test_ha-525790_ha-525790-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 cp testdata/cp-test.txt ha-525790-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 cp ha-525790-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3362703692/001/cp-test_ha-525790-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 cp ha-525790-m02:/home/docker/cp-test.txt ha-525790:/home/docker/cp-test_ha-525790-m02_ha-525790.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790 "sudo cat /home/docker/cp-test_ha-525790-m02_ha-525790.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 cp ha-525790-m02:/home/docker/cp-test.txt ha-525790-m03:/home/docker/cp-test_ha-525790-m02_ha-525790-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790-m03 "sudo cat /home/docker/cp-test_ha-525790-m02_ha-525790-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 cp ha-525790-m02:/home/docker/cp-test.txt ha-525790-m04:/home/docker/cp-test_ha-525790-m02_ha-525790-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790-m04 "sudo cat /home/docker/cp-test_ha-525790-m02_ha-525790-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 cp testdata/cp-test.txt ha-525790-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 cp ha-525790-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3362703692/001/cp-test_ha-525790-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 cp ha-525790-m03:/home/docker/cp-test.txt ha-525790:/home/docker/cp-test_ha-525790-m03_ha-525790.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790 "sudo cat /home/docker/cp-test_ha-525790-m03_ha-525790.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 cp ha-525790-m03:/home/docker/cp-test.txt ha-525790-m02:/home/docker/cp-test_ha-525790-m03_ha-525790-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790-m02 "sudo cat /home/docker/cp-test_ha-525790-m03_ha-525790-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 cp ha-525790-m03:/home/docker/cp-test.txt ha-525790-m04:/home/docker/cp-test_ha-525790-m03_ha-525790-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790-m04 "sudo cat /home/docker/cp-test_ha-525790-m03_ha-525790-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 cp testdata/cp-test.txt ha-525790-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3362703692/001/cp-test_ha-525790-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt ha-525790:/home/docker/cp-test_ha-525790-m04_ha-525790.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790 "sudo cat /home/docker/cp-test_ha-525790-m04_ha-525790.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt ha-525790-m02:/home/docker/cp-test_ha-525790-m04_ha-525790-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790-m02 "sudo cat /home/docker/cp-test_ha-525790-m04_ha-525790-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 cp ha-525790-m04:/home/docker/cp-test.txt ha-525790-m03:/home/docker/cp-test_ha-525790-m04_ha-525790-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790-m03 "sudo cat /home/docker/cp-test_ha-525790-m04_ha-525790-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.80s)
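The whole copy matrix above is built from two primitives, `minikube cp` and a sudo cat over ssh; one host-to-node and one node-to-node hop as a sketch:

    out/minikube-linux-amd64 -p ha-525790 cp testdata/cp-test.txt ha-525790:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-525790 cp ha-525790:/home/docker/cp-test.txt ha-525790-m02:/home/docker/cp-test_ha-525790_ha-525790-m02.txt
    out/minikube-linux-amd64 -p ha-525790 ssh -n ha-525790-m02 "sudo cat /home/docker/cp-test_ha-525790_ha-525790-m02.txt"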

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.180334766s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.18s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (211.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-525790 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0920 19:11:24.184561  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-525790 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m30.412322385s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (211.24s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (78.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-525790 --control-plane -v=7 --alsologtostderr
E0920 19:14:55.904785  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-525790 --control-plane -v=7 --alsologtostderr: (1m18.039120036s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-525790 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.87s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                    
TestJSONOutput/start/Command (90.46s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-931674 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0920 19:16:24.180190  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-931674 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m30.455765683s)
--- PASS: TestJSONOutput/start/Command (90.46s)
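--output=json turns every start step into a CloudEvents-style JSON object on stdout (the TestErrorJSONOutput transcript further down shows the exact shape). A sketch of consuming that stream; the jq filter is an assumption and not part of the test:

    out/minikube-linux-amd64 start -p json-output-931674 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'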

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.71s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-931674 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-931674 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.37s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-931674 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-931674 --output=json --user=testUser: (7.367989787s)
--- PASS: TestJSONOutput/stop/Command (7.37s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-908296 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-908296 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.181484ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f3e9f3aa-e3f4-4eb8-a0e8-7fa2fcd4d1f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-908296] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"81dfc5de-2477-4c24-ad96-8d198a8fc5fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19678"}}
	{"specversion":"1.0","id":"c976aed1-5a40-4dc0-8c51-0f625ac767a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"25229271-7869-4985-9a04-87b4fc36ba99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig"}}
	{"specversion":"1.0","id":"40b94b16-4f01-4974-a456-192497d34265","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube"}}
	{"specversion":"1.0","id":"59e764ba-2256-4cda-bd4c-72705174afae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"05cf4f00-12c3-47a6-9458-eb8b600f36ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d9ee9130-0f50-437e-97cf-1ec607ff5094","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-908296" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-908296
--- PASS: TestErrorJSONOutput (0.19s)

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (88.42s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-622165 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-622165 --driver=kvm2  --container-runtime=crio: (42.694081214s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-640552 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-640552 --driver=kvm2  --container-runtime=crio: (42.856298258s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-622165
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-640552
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-640552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-640552
helpers_test.go:175: Cleaning up "first-622165" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-622165
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-622165: (1.01369123s)
--- PASS: TestMinikubeProfile (88.42s)
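TestMinikubeProfile spins up two throwaway clusters and flips the active profile between them; switching a profile only changes which cluster later minikube commands target. The sequence as a sketch:

    out/minikube-linux-amd64 start -p first-622165 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p second-640552 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 profile first-622165        # make first-622165 the active profile
    out/minikube-linux-amd64 profile list -ojson         # list all profiles as JSON
    out/minikube-linux-amd64 delete -p second-640552
    out/minikube-linux-amd64 delete -p first-622165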

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.34s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-094813 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-094813 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.335888175s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.34s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-094813 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-094813 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
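
A condensed sketch of the mount-start flow exercised by the two tests above, using the same flags with a placeholder profile name:

	$ minikube start -p mnt-demo --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 --container-runtime=crio
	$ minikube -p mnt-demo ssh -- ls /minikube-host    # the mounted host directory is visible in the guest
	$ minikube -p mnt-demo ssh -- mount | grep 9p      # the mount shows up as a 9p filesystem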

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.84s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-108821 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-108821 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.835254809s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.84s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-108821 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-108821 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-094813 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-108821 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-108821 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.66s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-108821
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-108821: (1.656875217s)
--- PASS: TestMountStart/serial/Stop (1.66s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.66s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-108821
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-108821: (21.656405055s)
--- PASS: TestMountStart/serial/RestartStopped (22.66s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-108821 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-108821 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (116.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-756894 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0920 19:19:55.905936  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:21:24.180112  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-756894 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m56.281192169s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (116.70s)
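
The two-node bring-up above corresponds to (profile name is a placeholder):

	$ minikube start -p mn-demo --wait=true --memory=2200 --nodes=2 --driver=kvm2 --container-runtime=crio
	$ minikube -p mn-demo status    # control plane and worker should both report Running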

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756894 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756894 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-756894 -- rollout status deployment/busybox: (2.011023474s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756894 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756894 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756894 -- exec busybox-7dff88458-7hxh2 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756894 -- exec busybox-7dff88458-kr8zb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756894 -- exec busybox-7dff88458-7hxh2 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756894 -- exec busybox-7dff88458-kr8zb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756894 -- exec busybox-7dff88458-7hxh2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756894 -- exec busybox-7dff88458-kr8zb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.55s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756894 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756894 -- exec busybox-7dff88458-7hxh2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756894 -- exec busybox-7dff88458-7hxh2 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756894 -- exec busybox-7dff88458-kr8zb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756894 -- exec busybox-7dff88458-kr8zb -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)
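
The host-reachability check above resolves host.minikube.internal inside a busybox pod and pings the address it returns; with a placeholder pod name that is roughly:

	$ kubectl exec busybox-pod -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	$ kubectl exec busybox-pod -- sh -c "ping -c 1 192.168.39.1"    # address taken from the lookup above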

                                                
                                    
TestMultiNode/serial/AddNode (54.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-756894 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-756894 -v 3 --alsologtostderr: (53.543491711s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (54.10s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-756894 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.56s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 cp testdata/cp-test.txt multinode-756894:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 ssh -n multinode-756894 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 cp multinode-756894:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3797588952/001/cp-test_multinode-756894.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 ssh -n multinode-756894 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 cp multinode-756894:/home/docker/cp-test.txt multinode-756894-m02:/home/docker/cp-test_multinode-756894_multinode-756894-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 ssh -n multinode-756894 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 ssh -n multinode-756894-m02 "sudo cat /home/docker/cp-test_multinode-756894_multinode-756894-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 cp multinode-756894:/home/docker/cp-test.txt multinode-756894-m03:/home/docker/cp-test_multinode-756894_multinode-756894-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 ssh -n multinode-756894 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 ssh -n multinode-756894-m03 "sudo cat /home/docker/cp-test_multinode-756894_multinode-756894-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 cp testdata/cp-test.txt multinode-756894-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 ssh -n multinode-756894-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 cp multinode-756894-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3797588952/001/cp-test_multinode-756894-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 ssh -n multinode-756894-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 cp multinode-756894-m02:/home/docker/cp-test.txt multinode-756894:/home/docker/cp-test_multinode-756894-m02_multinode-756894.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 ssh -n multinode-756894-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 ssh -n multinode-756894 "sudo cat /home/docker/cp-test_multinode-756894-m02_multinode-756894.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 cp multinode-756894-m02:/home/docker/cp-test.txt multinode-756894-m03:/home/docker/cp-test_multinode-756894-m02_multinode-756894-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 ssh -n multinode-756894-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 ssh -n multinode-756894-m03 "sudo cat /home/docker/cp-test_multinode-756894-m02_multinode-756894-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 cp testdata/cp-test.txt multinode-756894-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 ssh -n multinode-756894-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 cp multinode-756894-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3797588952/001/cp-test_multinode-756894-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 ssh -n multinode-756894-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 cp multinode-756894-m03:/home/docker/cp-test.txt multinode-756894:/home/docker/cp-test_multinode-756894-m03_multinode-756894.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 ssh -n multinode-756894-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 ssh -n multinode-756894 "sudo cat /home/docker/cp-test_multinode-756894-m03_multinode-756894.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 cp multinode-756894-m03:/home/docker/cp-test.txt multinode-756894-m02:/home/docker/cp-test_multinode-756894-m03_multinode-756894-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 ssh -n multinode-756894-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 ssh -n multinode-756894-m02 "sudo cat /home/docker/cp-test_multinode-756894-m03_multinode-756894-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.06s)
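
The copy matrix above exercises minikube cp in three directions; the basic forms, with placeholder profile and node names, are:

	$ minikube -p mn-demo cp testdata/cp-test.txt mn-demo:/home/docker/cp-test.txt                    # host -> node
	$ minikube -p mn-demo cp mn-demo:/home/docker/cp-test.txt /tmp/cp-test.txt                        # node -> host
	$ minikube -p mn-demo cp mn-demo:/home/docker/cp-test.txt mn-demo-m02:/home/docker/cp-test.txt    # node -> node
	$ minikube -p mn-demo ssh -n mn-demo-m02 "sudo cat /home/docker/cp-test.txt"                      # verify on the target node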

                                                
                                    
TestMultiNode/serial/StopNode (2.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-756894 node stop m03: (1.440257556s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-756894 status: exit status 7 (423.701202ms)

                                                
                                                
-- stdout --
	multinode-756894
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-756894-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-756894-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-756894 status --alsologtostderr: exit status 7 (429.538686ms)

                                                
                                                
-- stdout --
	multinode-756894
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-756894-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-756894-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 19:22:42.074359  781359 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:22:42.074508  781359 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:22:42.074531  781359 out.go:358] Setting ErrFile to fd 2...
	I0920 19:22:42.074538  781359 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:22:42.074726  781359 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
	I0920 19:22:42.074945  781359 out.go:352] Setting JSON to false
	I0920 19:22:42.074990  781359 mustload.go:65] Loading cluster: multinode-756894
	I0920 19:22:42.075097  781359 notify.go:220] Checking for updates...
	I0920 19:22:42.075461  781359 config.go:182] Loaded profile config "multinode-756894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:22:42.075485  781359 status.go:174] checking status of multinode-756894 ...
	I0920 19:22:42.075956  781359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:22:42.076007  781359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:22:42.095954  781359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38587
	I0920 19:22:42.096416  781359 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:22:42.097053  781359 main.go:141] libmachine: Using API Version  1
	I0920 19:22:42.097082  781359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:22:42.097427  781359 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:22:42.097622  781359 main.go:141] libmachine: (multinode-756894) Calling .GetState
	I0920 19:22:42.099329  781359 status.go:364] multinode-756894 host status = "Running" (err=<nil>)
	I0920 19:22:42.099348  781359 host.go:66] Checking if "multinode-756894" exists ...
	I0920 19:22:42.099643  781359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:22:42.099705  781359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:22:42.114776  781359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46847
	I0920 19:22:42.115236  781359 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:22:42.115718  781359 main.go:141] libmachine: Using API Version  1
	I0920 19:22:42.115744  781359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:22:42.116035  781359 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:22:42.116200  781359 main.go:141] libmachine: (multinode-756894) Calling .GetIP
	I0920 19:22:42.118713  781359 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:22:42.119123  781359 main.go:141] libmachine: (multinode-756894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:41:fc", ip: ""} in network mk-multinode-756894: {Iface:virbr1 ExpiryTime:2024-09-20 20:19:52 +0000 UTC Type:0 Mac:52:54:00:69:41:fc Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-756894 Clientid:01:52:54:00:69:41:fc}
	I0920 19:22:42.119148  781359 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined IP address 192.168.39.168 and MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:22:42.119262  781359 host.go:66] Checking if "multinode-756894" exists ...
	I0920 19:22:42.119563  781359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:22:42.119604  781359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:22:42.135007  781359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39397
	I0920 19:22:42.135408  781359 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:22:42.135876  781359 main.go:141] libmachine: Using API Version  1
	I0920 19:22:42.135900  781359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:22:42.136222  781359 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:22:42.136409  781359 main.go:141] libmachine: (multinode-756894) Calling .DriverName
	I0920 19:22:42.136600  781359 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 19:22:42.136623  781359 main.go:141] libmachine: (multinode-756894) Calling .GetSSHHostname
	I0920 19:22:42.139130  781359 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:22:42.139529  781359 main.go:141] libmachine: (multinode-756894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:41:fc", ip: ""} in network mk-multinode-756894: {Iface:virbr1 ExpiryTime:2024-09-20 20:19:52 +0000 UTC Type:0 Mac:52:54:00:69:41:fc Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-756894 Clientid:01:52:54:00:69:41:fc}
	I0920 19:22:42.139564  781359 main.go:141] libmachine: (multinode-756894) DBG | domain multinode-756894 has defined IP address 192.168.39.168 and MAC address 52:54:00:69:41:fc in network mk-multinode-756894
	I0920 19:22:42.139712  781359 main.go:141] libmachine: (multinode-756894) Calling .GetSSHPort
	I0920 19:22:42.139905  781359 main.go:141] libmachine: (multinode-756894) Calling .GetSSHKeyPath
	I0920 19:22:42.140056  781359 main.go:141] libmachine: (multinode-756894) Calling .GetSSHUsername
	I0920 19:22:42.140195  781359 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/multinode-756894/id_rsa Username:docker}
	I0920 19:22:42.226910  781359 ssh_runner.go:195] Run: systemctl --version
	I0920 19:22:42.233489  781359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:22:42.253601  781359 kubeconfig.go:125] found "multinode-756894" server: "https://192.168.39.168:8443"
	I0920 19:22:42.253637  781359 api_server.go:166] Checking apiserver status ...
	I0920 19:22:42.253678  781359 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:22:42.267650  781359 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1086/cgroup
	W0920 19:22:42.277349  781359 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1086/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:22:42.277431  781359 ssh_runner.go:195] Run: ls
	I0920 19:22:42.282976  781359 api_server.go:253] Checking apiserver healthz at https://192.168.39.168:8443/healthz ...
	I0920 19:22:42.286898  781359 api_server.go:279] https://192.168.39.168:8443/healthz returned 200:
	ok
	I0920 19:22:42.286927  781359 status.go:456] multinode-756894 apiserver status = Running (err=<nil>)
	I0920 19:22:42.286940  781359 status.go:176] multinode-756894 status: &{Name:multinode-756894 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 19:22:42.286961  781359 status.go:174] checking status of multinode-756894-m02 ...
	I0920 19:22:42.287247  781359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:22:42.287285  781359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:22:42.303015  781359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44037
	I0920 19:22:42.303442  781359 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:22:42.303893  781359 main.go:141] libmachine: Using API Version  1
	I0920 19:22:42.303919  781359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:22:42.304193  781359 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:22:42.304397  781359 main.go:141] libmachine: (multinode-756894-m02) Calling .GetState
	I0920 19:22:42.305953  781359 status.go:364] multinode-756894-m02 host status = "Running" (err=<nil>)
	I0920 19:22:42.305972  781359 host.go:66] Checking if "multinode-756894-m02" exists ...
	I0920 19:22:42.306270  781359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:22:42.306306  781359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:22:42.321937  781359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37747
	I0920 19:22:42.322321  781359 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:22:42.322797  781359 main.go:141] libmachine: Using API Version  1
	I0920 19:22:42.322819  781359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:22:42.323126  781359 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:22:42.323293  781359 main.go:141] libmachine: (multinode-756894-m02) Calling .GetIP
	I0920 19:22:42.326087  781359 main.go:141] libmachine: (multinode-756894-m02) DBG | domain multinode-756894-m02 has defined MAC address 52:54:00:51:1a:5d in network mk-multinode-756894
	I0920 19:22:42.326497  781359 main.go:141] libmachine: (multinode-756894-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1a:5d", ip: ""} in network mk-multinode-756894: {Iface:virbr1 ExpiryTime:2024-09-20 20:20:55 +0000 UTC Type:0 Mac:52:54:00:51:1a:5d Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:multinode-756894-m02 Clientid:01:52:54:00:51:1a:5d}
	I0920 19:22:42.326537  781359 main.go:141] libmachine: (multinode-756894-m02) DBG | domain multinode-756894-m02 has defined IP address 192.168.39.204 and MAC address 52:54:00:51:1a:5d in network mk-multinode-756894
	I0920 19:22:42.326642  781359 host.go:66] Checking if "multinode-756894-m02" exists ...
	I0920 19:22:42.327040  781359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:22:42.327078  781359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:22:42.343437  781359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39687
	I0920 19:22:42.343846  781359 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:22:42.344300  781359 main.go:141] libmachine: Using API Version  1
	I0920 19:22:42.344321  781359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:22:42.344674  781359 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:22:42.344878  781359 main.go:141] libmachine: (multinode-756894-m02) Calling .DriverName
	I0920 19:22:42.345075  781359 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 19:22:42.345104  781359 main.go:141] libmachine: (multinode-756894-m02) Calling .GetSSHHostname
	I0920 19:22:42.347909  781359 main.go:141] libmachine: (multinode-756894-m02) DBG | domain multinode-756894-m02 has defined MAC address 52:54:00:51:1a:5d in network mk-multinode-756894
	I0920 19:22:42.348297  781359 main.go:141] libmachine: (multinode-756894-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1a:5d", ip: ""} in network mk-multinode-756894: {Iface:virbr1 ExpiryTime:2024-09-20 20:20:55 +0000 UTC Type:0 Mac:52:54:00:51:1a:5d Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:multinode-756894-m02 Clientid:01:52:54:00:51:1a:5d}
	I0920 19:22:42.348322  781359 main.go:141] libmachine: (multinode-756894-m02) DBG | domain multinode-756894-m02 has defined IP address 192.168.39.204 and MAC address 52:54:00:51:1a:5d in network mk-multinode-756894
	I0920 19:22:42.348470  781359 main.go:141] libmachine: (multinode-756894-m02) Calling .GetSSHPort
	I0920 19:22:42.348643  781359 main.go:141] libmachine: (multinode-756894-m02) Calling .GetSSHKeyPath
	I0920 19:22:42.348771  781359 main.go:141] libmachine: (multinode-756894-m02) Calling .GetSSHUsername
	I0920 19:22:42.348923  781359 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/multinode-756894-m02/id_rsa Username:docker}
	I0920 19:22:42.425790  781359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:22:42.439526  781359 status.go:176] multinode-756894-m02 status: &{Name:multinode-756894-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0920 19:22:42.439563  781359 status.go:174] checking status of multinode-756894-m03 ...
	I0920 19:22:42.439944  781359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 19:22:42.439986  781359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 19:22:42.456114  781359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40493
	I0920 19:22:42.456616  781359 main.go:141] libmachine: () Calling .GetVersion
	I0920 19:22:42.457133  781359 main.go:141] libmachine: Using API Version  1
	I0920 19:22:42.457156  781359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 19:22:42.457459  781359 main.go:141] libmachine: () Calling .GetMachineName
	I0920 19:22:42.457652  781359 main.go:141] libmachine: (multinode-756894-m03) Calling .GetState
	I0920 19:22:42.459170  781359 status.go:364] multinode-756894-m03 host status = "Stopped" (err=<nil>)
	I0920 19:22:42.459186  781359 status.go:377] host is not running, skipping remaining checks
	I0920 19:22:42.459193  781359 status.go:176] multinode-756894-m03 status: &{Name:multinode-756894-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
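
As the transcript shows, stopping one node leaves the rest of the cluster up and makes minikube status exit 7 while that host is Stopped; the equivalent commands are:

	$ minikube -p mn-demo node stop m03
	$ minikube -p mn-demo status --alsologtostderr    # exit status 7 while m03 is stopped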

                                                
                                    
TestMultiNode/serial/StartAfterStop (37.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-756894 node start m03 -v=7 --alsologtostderr: (36.739805616s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.35s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-756894 node delete m03: (1.705217273s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.27s)
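
Restarting and removing individual nodes, as in the two tests above, uses the node subcommands:

	$ minikube -p mn-demo node start m03 -v=7 --alsologtostderr
	$ minikube -p mn-demo node delete m03
	$ kubectl get nodes    # the deleted node no longer appears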

                                                
                                    
TestMultiNode/serial/RestartMultiNode (179.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-756894 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0920 19:31:24.184041  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/functional-023857/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-756894 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m58.955452706s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756894 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (179.48s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (44.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-756894
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-756894-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-756894-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (65.024255ms)

                                                
                                                
-- stdout --
	* [multinode-756894-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-756894-m02' is duplicated with machine name 'multinode-756894-m02' in profile 'multinode-756894'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-756894-m03 --driver=kvm2  --container-runtime=crio
E0920 19:34:55.905292  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-756894-m03 --driver=kvm2  --container-runtime=crio: (43.494193371s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-756894
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-756894: exit status 80 (215.793218ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-756894 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-756894-m03 already exists in multinode-756894-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-756894-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.61s)
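
The name-conflict checks above demonstrate two guard rails: starting a profile whose name collides with a machine name inside an existing multi-node profile fails with exit 14 (MK_USAGE), and node add fails with exit 80 (GUEST_NODE_ADD) when the next node name is already taken by another profile. Schematically, with placeholder names:

	$ minikube start -p mn-demo-m02 --driver=kvm2 --container-runtime=crio    # exit 14: duplicates a machine name in profile mn-demo
	$ minikube node add -p mn-demo                                            # exit 80: node name already exists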

                                                
                                    
TestScheduledStopUnix (115.02s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-907656 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-907656 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.472321345s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-907656 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-907656 -n scheduled-stop-907656
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-907656 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0920 19:38:31.716682  748497 retry.go:31] will retry after 138.775µs: open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/scheduled-stop-907656/pid: no such file or directory
I0920 19:38:31.717858  748497 retry.go:31] will retry after 80.439µs: open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/scheduled-stop-907656/pid: no such file or directory
I0920 19:38:31.719014  748497 retry.go:31] will retry after 313.3µs: open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/scheduled-stop-907656/pid: no such file or directory
I0920 19:38:31.720142  748497 retry.go:31] will retry after 410.069µs: open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/scheduled-stop-907656/pid: no such file or directory
I0920 19:38:31.721273  748497 retry.go:31] will retry after 411.044µs: open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/scheduled-stop-907656/pid: no such file or directory
I0920 19:38:31.722381  748497 retry.go:31] will retry after 411.629µs: open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/scheduled-stop-907656/pid: no such file or directory
I0920 19:38:31.723509  748497 retry.go:31] will retry after 720.294µs: open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/scheduled-stop-907656/pid: no such file or directory
I0920 19:38:31.724644  748497 retry.go:31] will retry after 1.86541ms: open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/scheduled-stop-907656/pid: no such file or directory
I0920 19:38:31.726890  748497 retry.go:31] will retry after 2.15091ms: open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/scheduled-stop-907656/pid: no such file or directory
I0920 19:38:31.730152  748497 retry.go:31] will retry after 5.471307ms: open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/scheduled-stop-907656/pid: no such file or directory
I0920 19:38:31.736410  748497 retry.go:31] will retry after 3.564396ms: open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/scheduled-stop-907656/pid: no such file or directory
I0920 19:38:31.740633  748497 retry.go:31] will retry after 9.956949ms: open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/scheduled-stop-907656/pid: no such file or directory
I0920 19:38:31.750907  748497 retry.go:31] will retry after 9.616345ms: open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/scheduled-stop-907656/pid: no such file or directory
I0920 19:38:31.761139  748497 retry.go:31] will retry after 19.773792ms: open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/scheduled-stop-907656/pid: no such file or directory
I0920 19:38:31.781431  748497 retry.go:31] will retry after 21.824059ms: open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/scheduled-stop-907656/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-907656 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-907656 -n scheduled-stop-907656
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-907656
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-907656 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-907656
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-907656: exit status 7 (62.550324ms)

                                                
                                                
-- stdout --
	scheduled-stop-907656
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-907656 -n scheduled-stop-907656
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-907656 -n scheduled-stop-907656: exit status 7 (62.250681ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-907656" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-907656
--- PASS: TestScheduledStopUnix (115.02s)
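
The scheduled-stop flow above, written out with a placeholder profile name:

	$ minikube stop -p sched-demo --schedule 5m         # arm a stop five minutes out
	$ minikube stop -p sched-demo --cancel-scheduled    # cancel the pending stop
	$ minikube stop -p sched-demo --schedule 15s        # re-arm with a short delay
	$ minikube status -p sched-demo                     # exits 7 once the VM has actually stopped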

                                                
                                    
TestRunningBinaryUpgrade (181.47s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.223426940 start -p running-upgrade-666227 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.223426940 start -p running-upgrade-666227 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m18.28106305s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-666227 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-666227 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m41.290740626s)
helpers_test.go:175: Cleaning up "running-upgrade-666227" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-666227
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-666227: (1.254847837s)
--- PASS: TestRunningBinaryUpgrade (181.47s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-677486 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-677486 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (90.892963ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-677486] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (119.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-677486 --driver=kvm2  --container-runtime=crio
E0920 19:39:55.904511  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-677486 --driver=kvm2  --container-runtime=crio: (1m59.524346401s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-677486 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (119.77s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (44.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-677486 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-677486 --no-kubernetes --driver=kvm2  --container-runtime=crio: (43.301492394s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-677486 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-677486 status -o json: exit status 2 (236.619738ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-677486","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-677486
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-677486: (1.085625858s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (44.62s)

                                                
                                    
TestNoKubernetes/serial/Start (25.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-677486 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-677486 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.604348464s)
--- PASS: TestNoKubernetes/serial/Start (25.60s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-677486 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-677486 "sudo systemctl is-active --quiet service kubelet": exit status 1 (197.726173ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
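
Taken together, the --no-kubernetes tests in this group show that the flag cannot be combined with --kubernetes-version (exit 14) and that, on a profile started without Kubernetes, the kubelet service is simply not active on the node. With a placeholder profile name:

	$ minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio    # exit 14 (MK_USAGE)
	$ minikube start -p nok8s-demo --no-kubernetes --driver=kvm2 --container-runtime=crio
	$ minikube ssh -p nok8s-demo "sudo systemctl is-active --quiet service kubelet" || echo "kubelet not running"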

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.98s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-677486
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-677486: (1.547774404s)
--- PASS: TestNoKubernetes/serial/Stop (1.55s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (68.81s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-677486 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-677486 --driver=kvm2  --container-runtime=crio: (1m8.812680916s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (68.81s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-677486 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-677486 "sudo systemctl is-active --quiet service kubelet": exit status 1 (194.538824ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.36s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.36s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (109.99s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2404797341 start -p stopped-upgrade-690942 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2404797341 start -p stopped-upgrade-690942 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m4.971441828s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2404797341 -p stopped-upgrade-690942 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2404797341 -p stopped-upgrade-690942 stop: (2.142153032s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-690942 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-690942 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (42.873054333s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (109.99s)
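
Both binary-upgrade tests above follow the same pattern: create a cluster with an older minikube release, optionally stop it, then re-run start on the same profile with the current binary (the old-binary path and profile name below are placeholders):

	$ /tmp/minikube-v1.26.0 start -p upgrade-demo --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	$ /tmp/minikube-v1.26.0 -p upgrade-demo stop
	$ minikube start -p upgrade-demo --memory=2200 --driver=kvm2 --container-runtime=crio    # current binary adopts the existing profile
	$ minikube logs -p upgrade-demo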

                                                
                                    
TestPause/serial/Start (82.41s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-389954 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E0920 19:44:55.905166  748497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-389954 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m22.411374443s)
--- PASS: TestPause/serial/Start (82.41s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-690942
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)
Test skip (34/221)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:817: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)